
3345 Databricks Jobs - Page 46

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum experience required: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Databricks resource with Azure cloud experience
- Expected to perform independently and become an SME
- Active participation and contribution in team discussions
- Contribute solutions to work-related problems
- Collaborate with data architects and analysts to design scalable data solutions
- Implement best practices for data governance and security throughout the data lifecycle

Professional & Technical Skills:
- Must have: Proficiency in Databricks Unified Data Analytics Platform
- Good to have: Experience with Business Agility
- Strong understanding of data modeling and database design principles
- Experience with data integration tools and ETL processes
- Familiarity with cloud platforms and services related to data storage and processing

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform
- This position is based at our Pune office
- 15 years of full-time education is required

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

- 4-6 years of experience with data analytics
- Skilled in Databricks using SQL
- Working knowledge of Snowflake and Python
- Hands-on experience with large datasets and data structures using SQL
- Experience working with financial and/or alternative data products
- Excellent analytical and strong problem-solving skills
- Exposure to S&P Capital IQ
- Exposure to data models on Databricks
- Education: B.E./B.Tech in Computer Science or related field

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum experience required: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Databricks resource with Azure cloud experience
- Expected to perform independently and become an SME
- Active participation and contribution in team discussions
- Contribute solutions to work-related problems
- Collaborate with data architects and analysts to design scalable data solutions
- Implement best practices for data governance and security throughout the data lifecycle

Professional & Technical Skills:
- Must have: Proficiency in Databricks Unified Data Analytics Platform
- Good to have: Experience with Business Agility
- Strong understanding of data modeling and database design principles
- Experience with data integration tools and ETL processes
- Familiarity with cloud platforms and services related to data storage and processing

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform
- This position is based at our Pune office
- 15 years of full-time education is required

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
Publicis Sapient is looking for Python developers to join our team of bright thinkers and enablers. You will use your problem-solving skills, craft, and creativity to design and develop infrastructure interfaces for complex business applications. We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.

Key Responsibilities:
- Design and develop scalable PySpark data pipelines to ensure efficient processing of large datasets, enabling faster insights and business decision-making.
- Leverage Databricks notebooks for collaborative data engineering and analytics, improving team productivity and reducing development cycle times.
- Write clean, modular, and reusable Python code to support data transformation and enrichment, ensuring maintainability and reducing technical debt.
- Implement data quality checks and validation logic within ETL workflows to ensure trusted data is delivered for downstream analytics and reporting.
- Optimize Spark jobs for performance and cost-efficiency by tuning partitions, caching strategies, and cluster configurations, resulting in reduced compute costs.

Qualifications: Your Skills & Experience
- Solid understanding of Python programming fundamentals, especially in building modular, efficient, and testable code for data processing.
- Familiarity with libraries like pandas, NumPy, and SQLAlchemy (for lightweight transformations or metadata management).
- Proficient in writing and optimizing PySpark code for large-scale distributed data processing.
- Deep knowledge of Spark internals: partitioning, shuffling, lazy evaluation, and performance tuning.
- Comfortable using Databricks notebooks, clusters, and Delta Lake.

Set Yourself Apart With:
- Familiarity with cloud-native services like AWS S3, EMR, Glue, Lambda, or Azure Data Factory. Experience deploying or integrating pipelines within a cloud environment adds flexibility and scalability.
- Experience with tools like Great Expectations or custom-built validation logic to ensure data trustworthiness.

A Tip From The Hiring Manager:
This person should be highly organized, adapt quickly to change, and thrive in a fast-paced organization. This is a job for the curious, make-things-happen kind of person: someone who thinks like an entrepreneur and can motivate and move their team to achieve and drive impact.

Benefits Of Working Here:
- Gender-neutral policy
- 18 paid holidays throughout the year
- Generous parental leave and new-parent transition program
- Employee Assistance Programs to help you in wellness and well-being

Company Description:
Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting, and customer obsession to accelerate our clients' businesses through designing the products and services their customers truly value.
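For illustration only, here is a minimal PySpark sketch of the kind of pipeline this posting describes: ingest, validate, enrich, and write out with the shuffle partition count tuned. The paths and column names (order_id, amount, fx_rate) are invented placeholders, not details of the role.

# Minimal PySpark sketch: validation, enrichment, and partition tuning.
# All paths and columns are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingest raw data (placeholder path)
raw = spark.read.parquet("/mnt/raw/orders")

# Data quality checks: drop rows with null keys or non-positive amounts
clean = raw.filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
dropped = raw.count() - clean.count()
if dropped > 0:
    print(f"Dropped {dropped} rows that failed validation")

# Transformation / enrichment step
enriched = (clean
            .withColumn("order_date", F.to_date("order_ts"))
            .withColumn("amount_usd", F.col("amount") * F.col("fx_rate")))

# Performance tuning: set the shuffle partition count and partition the output by date
spark.conf.set("spark.sql.shuffle.partitions", "200")
(enriched.write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("/mnt/curated/orders"))

Partitioning the output by a date column keeps downstream reads selective, which is one common way the "cost-efficiency" goal in the posting is met.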

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum experience required: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Databricks resource with Azure cloud experience
- Expected to perform independently and become an SME
- Active participation and contribution in team discussions
- Contribute solutions to work-related problems
- Collaborate with data architects and analysts to design scalable data solutions
- Implement best practices for data governance and security throughout the data lifecycle

Professional & Technical Skills:
- Must have: Proficiency in Databricks Unified Data Analytics Platform
- Good to have: Experience with Business Agility
- Strong understanding of data modeling and database design principles
- Experience with data integration tools and ETL processes
- Familiarity with cloud platforms and services related to data storage and processing

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform
- This position is based at our Pune office
- 15 years of full-time education is required

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

- Candidates should have a B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science, or a related field
- 5-9 years of strong experience in R programming and package development
- Proficiency with GitHub and unit testing frameworks
- Strong documentation and communication skills
- A background or work experience in biostatistics or a similar discipline (preferred)
- Expert knowledge in survival analysis (preferred)
- Statistical model deployment and end-to-end MLOps is nice to have
- Extensive experience with cloud infrastructure, preferably Databricks and Azure
- Shiny development is nice to have
- Can work with customer stakeholders to understand business processes and workflows, and can design solutions to optimize processes via streamlining and automation
- DevOps experience and familiarity with the software release process
- Familiar with agile delivery methods
- Excellent communication skills, both written and verbal
- Extremely strong organizational and analytical skills with strong attention to detail
- Strong track record of excellent results delivered to internal and external clients
- Able to work independently without the need for close supervision, and also collaboratively as part of cross-team efforts
- Experience with delivering projects within an agile environment

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Senior Data Functional Consultant - Fully Remote - 6 Month Contract

Role: Senior Data Functional Consultant
Client Location: Dubai
Work Location: Fully Remote
Duration: 6 months, extendable
Monthly Rate: $2,000 USD

Our Dubai-based client is seeking a proactive and business-oriented Data Functional Consultant with strong experience in Azure Data Factory and Azure Databricks. This role bridges the gap between business stakeholders and technical teams: translating business needs into scalable data solutions, ensuring effective data management, and enabling insights-driven decision-making. The ideal candidate is not a pure developer or data engineer but someone who understands business processes, data flows, and stakeholder priorities, and can help drive value from data platforms using cloud-native Azure services.

Experience Required:
- Proven exposure to Azure Data Factory (ADF) for orchestrating data workflows
- Practical experience with Azure Databricks for data processing (functional understanding, not necessarily coding)
- Strong understanding of data warehousing, data modeling, and business KPIs
- Experience working in agile or hybrid project environments
- Excellent communication and stakeholder management skills
- Ability to translate complex technical details into business-friendly language
- Familiarity with tools like Power BI, Excel, or other reporting solutions is a plus
- Background in the banking or finance industries is a bonus

Responsibilities:
- Collaborate closely with business stakeholders to gather, understand, and document functional data requirements; translate business needs into high-level data design, data workflows, and process improvements
- Work with data engineering teams to define and validate ETL/ELT logic and data pipeline workflows using Azure Data Factory and Databricks
- Facilitate functional workshops and stakeholder meetings to align on data needs and business KPIs
- Act as a bridge between business teams and data engineers to ensure accurate implementation and delivery of data solutions
- Conduct data validation and UAT, and support users in adopting data platforms and self-service analytics
- Maintain functional documentation, data dictionaries, and mapping specifications
- Assist in defining data governance, data quality, and master data management practices from a business perspective
- Monitor data pipeline health and help triage issues from a functional/business impact standpoint

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Responsibilities
Design and develop complex applications. We are looking for an innovative, results-oriented individual seeking challenges in order to apply the knowledge and experience gained working across a number of clients. The work includes development of real-time, multi-threaded applications.

Desired Skills and Experience (Candidate Profile):
- 5+ years of industry experience in software development using Java, Spring Boot, and SQL
- Proficient in using Java 8 features such as lambda expressions, streams, and functional interfaces
- Experience with newer versions of Java and their enhancements
- Strong understanding and practical experience with various data structures (arrays, linked lists, stacks, queues, trees, graphs) and algorithms (sorting, searching, dynamic programming, etc.)
- Experience in the full software development lifecycle (SDLC), including requirements gathering, design, coding, testing, and deployment
- Familiar with Spring, Hibernate, Maven, Gradle, and other Java-related frameworks and tools
- Proficient in SQL, with experience in databases like MySQL, PostgreSQL, or Oracle
- Experience working with technologies such as Kafka, MongoDB, Apache Spark/Databricks, and Azure Cloud
- Good experience with APIs/microservices, publisher/subscriber, and related data integration patterns
- Experience in unit testing with JUnit or a similar framework
- Strong understanding of OOP and design patterns
- Working with users, senior management, and stakeholders across multiple disciplines
- Mentoring and developing technical colleagues
- Code management knowledge (e.g., version control, code branching and merging, continuous integration and delivery, build and deployment strategies, testing lifecycle)
- Experience in managing stakeholder expectations (client and project team) and generating relevant reports
- Excellent project tracking and monitoring skills
- Good decision-making and problem-solving skills
- Adaptable, flexible, and able to prioritize and work to tight schedules
- Ability to manage pressure, ambiguity, and change
- Good understanding of all knowledge areas in software development, including requirement gathering, design, development, testing, maintenance, quality control, etc.
- Preferred: experience with Agile methodology and knowledge of the Financial Services/Asset Management industry
- Ensure quality of deliverables within project timelines
- Independently manage daily client communication, especially over calls
- Drive work towards completion with accuracy and timely deliverables
- Good to have: Financial Services knowledge

Key Responsibilities
The candidate will interact with global financial clients regularly and will be responsible for final delivery of work, including:
- Translate client requirements into actionable software solutions
- Understand the business requirements from the customers
- Direct and manage project development from beginning to end
- Effectively communicate project expectations to team members in a timely and clear manner
- Communicate with relevant stakeholders on an ongoing basis
- Identify and manage project dependencies and critical path
- Guide the team to implement industry best practices
- Work as part of a team developing new enhancements and revamping the existing trade limit persistence and pre-trade risk check microservices (LMS), based on the client's own low-latency framework
- Design and develop the persistence cache layer, which will use MongoDB persistence for storage
- Design and develop SMS integration to send out the 2FA code and for other business reasons
- Migrate the existing Couchbase DB based limit-document processing system to a new AMPS-based processing microservice
- Design and implement the system from scratch, and build enhancements and feature requests using Java and Spring Boot
- Build prototypes of applications and solutions as needed
- Be involved in both development and maintenance of the systems
- Work collaboratively in a global setting; be eager to learn new technologies
- Provide support for any implemented solutions, including incident, problem, and defect management, and cross-train other members so that they are able to support the solutions
- Extend and maintain the existing codebase with a focus on quality, re-usability, maintainability, and consistency
- Independently troubleshoot difficult and complex issues in production and other environments
- Demonstrate high attention to detail; work in a dynamic environment while maintaining high quality standards, with a natural aptitude for developing good internal working relationships and a flexible work ethic
- Be responsible for quality checks and for adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 2200000 - Rs 2400000 (i.e., INR 22-24 LPA)
Min Experience: 5 years
Location: Pune, Bengaluru, Chennai, Kolkata, Gurgaon
Job Type: Full-time

We are looking for an experienced Snowflake Developer to join our Data Engineering team. The ideal candidate will possess a deep understanding of data warehousing, SQL, ETL tools like Informatica, and visualization platforms such as Power BI. This role involves building scalable data pipelines, optimizing data architectures, and collaborating with cross-functional teams to deliver impactful data solutions.

Key Responsibilities:
- Data Engineering & Warehousing: Leverage over 5 years of hands-on experience in data engineering with a focus on data warehousing and business intelligence.
- Pipeline Development: Design and maintain ELT pipelines using Snowflake, Fivetran, and DBT to ingest and transform data from multiple sources.
- SQL Development: Write and optimize complex SQL queries and stored procedures to support robust data transformations and analytics.
- Data Modeling & ELT: Implement advanced data modeling practices, including SCD Type-2, and build high-performance ELT workflows using DBT.
- Requirement Analysis: Partner with business stakeholders to capture data needs and convert them into scalable technical solutions.
- Data Quality & Troubleshooting: Conduct root cause analysis on data issues, maintain high data integrity, and ensure reliability across systems.
- Collaboration & Documentation: Collaborate with engineering and business teams; develop and maintain thorough documentation for pipelines, data models, and processes.

Skills & Qualifications:
- Expertise in Snowflake for large-scale data warehousing and ELT operations
- Strong SQL skills with the ability to create and manage complex queries and procedures
- Proven experience with Informatica PowerCenter for ETL development
- Proficiency with Power BI for data visualization and reporting
- Hands-on experience with Fivetran for automated data integration
- Familiarity with DBT, Sigma Computing, Tableau, and Oracle
- Solid understanding of data analysis, requirement gathering, and source-to-target mapping
- Knowledge of cloud ecosystems such as Azure (including ADF and Databricks); experience with AWS or GCP is a plus
- Experience with workflow orchestration tools like Airflow, Azkaban, or Luigi
- Proficiency in Python for scripting and data processing (Java or Scala is a plus)
- Bachelor's or graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related field

Key Tools & Technologies: Snowflake, SnowSQL, Snowpark, SQL, Informatica, Power BI, DBT, Python, Fivetran, Sigma Computing, Tableau, Airflow, Azkaban, Azure, Databricks, ADF
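As a rough illustration of the SCD Type-2 modeling mentioned above, the sketch below applies a Type-2 update to a dimension table in Snowflake from Python using the snowflake-connector-python package. All table, column, and credential names are placeholders; in the DBT-based stack the posting describes, this logic would more typically live in a DBT snapshot.

# Hypothetical SCD Type-2 update: expire changed rows, then insert new current versions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",   # placeholder credentials
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Step 1: close out current dimension rows whose attributes changed in the staged batch
cur.execute("""
    MERGE INTO ANALYTICS.MART.DIM_CUSTOMER AS d
    USING STAGING.CUSTOMER_UPDATES AS s
      ON d.customer_id = s.customer_id AND d.is_current = TRUE
    WHEN MATCHED AND (d.email <> s.email OR d.segment <> s.segment) THEN
      UPDATE SET is_current = FALSE, valid_to = CURRENT_TIMESTAMP()
""")

# Step 2: insert a new current version for changed or brand-new customers
cur.execute("""
    INSERT INTO ANALYTICS.MART.DIM_CUSTOMER
      (customer_id, email, segment, valid_from, valid_to, is_current)
    SELECT s.customer_id, s.email, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE
    FROM STAGING.CUSTOMER_UPDATES s
    LEFT JOIN ANALYTICS.MART.DIM_CUSTOMER d
      ON d.customer_id = s.customer_id AND d.is_current = TRUE
    WHERE d.customer_id IS NULL
""")
conn.close()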

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 2200000 - Rs 2400000 (i.e., INR 22-24 LPA)
Min Experience: 5 years
Location: Pune, Bengaluru, Chennai, Kolkata, Gurgaon
Job Type: Full-time

We are looking for an experienced Snowflake Developer to join our Data Engineering team. The ideal candidate will possess a deep understanding of data warehousing, SQL, ETL tools like Informatica, and visualization platforms such as Power BI. This role involves building scalable data pipelines, optimizing data architectures, and collaborating with cross-functional teams to deliver impactful data solutions.

Key Responsibilities:
- Data Engineering & Warehousing: Leverage over 5 years of hands-on experience in data engineering with a focus on data warehousing and business intelligence.
- Pipeline Development: Design and maintain ELT pipelines using Snowflake, Fivetran, and DBT to ingest and transform data from multiple sources.
- SQL Development: Write and optimize complex SQL queries and stored procedures to support robust data transformations and analytics.
- Data Modeling & ELT: Implement advanced data modeling practices, including SCD Type-2, and build high-performance ELT workflows using DBT.
- Requirement Analysis: Partner with business stakeholders to capture data needs and convert them into scalable technical solutions.
- Data Quality & Troubleshooting: Conduct root cause analysis on data issues, maintain high data integrity, and ensure reliability across systems.
- Collaboration & Documentation: Collaborate with engineering and business teams; develop and maintain thorough documentation for pipelines, data models, and processes.

Skills & Qualifications:
- Expertise in Snowflake for large-scale data warehousing and ELT operations
- Strong SQL skills with the ability to create and manage complex queries and procedures
- Proven experience with Informatica PowerCenter for ETL development
- Proficiency with Power BI for data visualization and reporting
- Hands-on experience with Fivetran for automated data integration
- Familiarity with DBT, Sigma Computing, Tableau, and Oracle
- Solid understanding of data analysis, requirement gathering, and source-to-target mapping
- Knowledge of cloud ecosystems such as Azure (including ADF and Databricks); experience with AWS or GCP is a plus
- Experience with workflow orchestration tools like Airflow, Azkaban, or Luigi
- Proficiency in Python for scripting and data processing (Java or Scala is a plus)
- Bachelor's or graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related field

Key Tools & Technologies: Snowflake, SnowSQL, Snowpark, SQL, Informatica, Power BI, DBT, Python, Fivetran, Sigma Computing, Tableau, Airflow, Azkaban, Azure, Databricks, ADF

Posted 1 week ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

This role is for one of Weekday's clients.
Salary range: Rs 2200000 - Rs 2400000 (i.e., INR 22-24 LPA)
Min Experience: 5 years
Location: Pune, Bengaluru, Chennai, Kolkata, Gurgaon
Job Type: Full-time

We are looking for an experienced Snowflake Developer to join our Data Engineering team. The ideal candidate will possess a deep understanding of data warehousing, SQL, ETL tools like Informatica, and visualization platforms such as Power BI. This role involves building scalable data pipelines, optimizing data architectures, and collaborating with cross-functional teams to deliver impactful data solutions.

Key Responsibilities:
- Data Engineering & Warehousing: Leverage over 5 years of hands-on experience in data engineering with a focus on data warehousing and business intelligence.
- Pipeline Development: Design and maintain ELT pipelines using Snowflake, Fivetran, and DBT to ingest and transform data from multiple sources.
- SQL Development: Write and optimize complex SQL queries and stored procedures to support robust data transformations and analytics.
- Data Modeling & ELT: Implement advanced data modeling practices, including SCD Type-2, and build high-performance ELT workflows using DBT.
- Requirement Analysis: Partner with business stakeholders to capture data needs and convert them into scalable technical solutions.
- Data Quality & Troubleshooting: Conduct root cause analysis on data issues, maintain high data integrity, and ensure reliability across systems.
- Collaboration & Documentation: Collaborate with engineering and business teams; develop and maintain thorough documentation for pipelines, data models, and processes.

Skills & Qualifications:
- Expertise in Snowflake for large-scale data warehousing and ELT operations
- Strong SQL skills with the ability to create and manage complex queries and procedures
- Proven experience with Informatica PowerCenter for ETL development
- Proficiency with Power BI for data visualization and reporting
- Hands-on experience with Fivetran for automated data integration
- Familiarity with DBT, Sigma Computing, Tableau, and Oracle
- Solid understanding of data analysis, requirement gathering, and source-to-target mapping
- Knowledge of cloud ecosystems such as Azure (including ADF and Databricks); experience with AWS or GCP is a plus
- Experience with workflow orchestration tools like Airflow, Azkaban, or Luigi
- Proficiency in Python for scripting and data processing (Java or Scala is a plus)
- Bachelor's or graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related field

Key Tools & Technologies: Snowflake, SnowSQL, Snowpark, SQL, Informatica, Power BI, DBT, Python, Fivetran, Sigma Computing, Tableau, Airflow, Azkaban, Azure, Databricks, ADF

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description
Description of role and key responsibilities:
The candidate will be required to deliver to all stages of the data engineering process (data ingestion, transformation, data modelling and data warehousing) and to build self-service data products. The role is a mix of Azure cloud delivery and on-prem (SQL) development. Ultimately all on-prem systems will be migrated to cloud and decommissioned, but we are only part way along that journey. There will be a dual reporting line between the main business technology area (Asset Lending), which provides the day-to-day direction and management of work items, and the Head of Data for Corporate Banking Technology, who provides guidance on overall data strategy and alignment with the wider bank. The role will work closely with our Architect, Engineering Lead, Analytics team, DevOps, DBAs, and upstream application teams in Asset Finance, Working Capital, and ABL.

Specifically, the person will:
- Work closely with end-users and Data Analysts to understand the business and their data requirements
- Carry out ad hoc data analysis and 'data wrangling' using Synapse Analytics and Databricks
- Build dynamic, metadata-driven data ingestion patterns using Azure Data Factory and Databricks
- Build and maintain the Enterprise Data Warehouse (using the Data Vault 2.0 methodology)
- Build and maintain business-focused data products and data marts
- Build and maintain Azure Analysis Services databases and cubes
- Share support and operational duties within the wider engineering and data teams
- Work with Architecture and Engineering teams to deliver on these projects, and ensure that supporting code and infrastructure follow best practices outlined by these teams
- Help define test criteria to establish clear conditions for success and ensure alignment with business objectives
- Manage their user stories and acceptance criteria through to production and into day-to-day support
- Assist in the testing and validation of new requirements and processes to ensure they meet business needs
- Stay up to date with industry trends and best practices in data engineering

Core skills and knowledge:
- Excellent data analysis and exploration using T-SQL
- Strong SQL programming (stored procedures, functions)
- Extensive experience with SQL Server and SSIS
- Knowledge and experience of data warehouse modelling methodologies (Kimball, dimensional modelling, Data Vault 2.0)
- Experience in Azure with one or more of the following: Data Factory, Databricks, Synapse Analytics, ADLS Gen2
- Experience in building robust and performant ETL processes
- Ability to build and maintain Analysis Services databases and cubes (both multidimensional and tabular)
- Experience in using source control and ADO
- Understanding and experience of deployment pipelines
- Excellent analytical and problem-solving skills, with the ability to think critically and strategically
- Strong communication and interpersonal skills, with the ability to engage and influence stakeholders at all levels
- Acting with integrity at all times and embracing the philosophy of treating our customers fairly
- Analytical, with the ability to arrive at solutions that fit current and future business processes
- Effective written and verbal communication
- Organisational skills: ability to effectively manage and coordinate themselves
- Ownership and self-motivation
- Delivery focus
- Assertive, resilient, and persistent
- Team oriented
- Deals well with pressure; highly effective at multi-tasking and juggling priorities

Other attributes that would be helpful, but are not essential for the role:
- Deeper programming ability (C#, .NET Core)
- Building 'infrastructure-as-code' deployment pipelines
- Asset Finance knowledge
- Vehicle Finance knowledge
- ABL and Working Capital knowledge
- Any financial services and banking experience
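As an illustration of the "metadata-driven ingestion" responsibility above, the following sketch shows one simple pattern in a Databricks notebook: a small config list drives which source tables get landed as Delta. The config entries, JDBC URL, paths, and credentials are all hypothetical, not the client's actual setup.

# Hypothetical metadata-driven ingestion loop for a Databricks notebook.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

# Ingestion metadata: one entry per source table to land as Delta (illustrative names)
ingestion_config = [
    {"source": "dbo.loans",     "target": "/mnt/raw/loans"},
    {"source": "dbo.customers", "target": "/mnt/raw/customers"},
]

jdbc_url = "jdbc:sqlserver://onprem-sql:1433;databaseName=AssetLending"  # placeholder

for feed in ingestion_config:
    df = (spark.read.format("jdbc")
            .option("url", jdbc_url)            # credentials would come from a secret scope
            .option("dbtable", feed["source"])
            .load())
    (df.withColumn("_loaded_at", F.current_timestamp())
       .write.format("delta")
       .mode("append")
       .save(feed["target"]))

In practice the config would usually live in a control table rather than in code, so new feeds can be onboarded without a deployment.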

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Azure Data
Total IT Experience: 4 to 7 years
Location: Indore / Pune
Relevant Experience Required: 3+ years of direct experience in analyzing and deriving source systems, data governance, metadata management, data architecture, data quality, and metadata-related output; strong experience in different types of data analysis covering business data, metadata, master data, and analytical data
Language Requirement: English
Keywords to search in resume: Databricks, Azure Data Factory

Technical/Functional Skills (must have):
- 3+ years of hands-on experience with Databricks
- Conduct an assessment of the existing systems in the landscape
- Devise a strategy for SAS-to-Databricks migration activities
- Work out a plan to perform the above activities
- Work closely with the customer on a daily basis and present the progress made and the plan of action
- Interact with onsite and offshore Cognizant associates to ensure that the project deliverables are on track

Secondary Skills:
- Data management solutions with capabilities such as data ingestion, data curation, metadata and catalog, data security, data modeling, and data wrangling

Responsibilities:
- Hands-on experience in installing, configuring, and using MS Azure Databricks and Hadoop ecosystem components like DBFS, Parquet, Delta Tables, HDFS, MapReduce programming, Kafka, Spark, and Event Hub
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib
- Hands-on experience in scripting languages like Scala and Python
- Hands-on experience in the analysis, design, coding, and testing phases of the SDLC with best practices
- Expertise in using Spark SQL with various data sources like JSON, Parquet, and key-value pairs
- Experience in creating tables, partitioning, bucketing, loading, and aggregating data using Spark SQL/Scala
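To make the last responsibility concrete, here is a small, hypothetical Databricks-style sketch: read a JSON feed, create a partitioned Delta table with Spark SQL, and aggregate it. The paths, database, table, and column names are invented for illustration.

# Hypothetical Spark SQL sketch: JSON in, partitioned Delta table out, then an aggregate.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Load a raw JSON feed and register it for SQL access
events = spark.read.json("/mnt/landing/events/")
events.createOrReplaceTempView("events_raw")

# Create a partitioned Delta table from the raw view
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.events_curated
    USING DELTA
    PARTITIONED BY (event_date)
    AS SELECT *, to_date(event_ts) AS event_date FROM events_raw
""")

# Aggregate with Spark SQL for downstream reporting
daily_counts = spark.sql("""
    SELECT event_date, event_type, count(*) AS events
    FROM analytics.events_curated
    GROUP BY event_date, event_type
""")
daily_counts.show()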

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Purpose:
As a key member of the DTS team, you will primarily collaborate closely with a leading global hedge fund on data engagements. You will partner with the data strategy and sourcing team on data requirements, working directly on processes that develop the inputs to our models, and migrate processes from MATLAB to Databricks as part of a move to a more modern approach to updating them.

Desired Skills and Experience:
- 4-6 years of experience with data analytics
- Skilled in Databricks using SQL
- Working knowledge of Snowflake and Python
- Hands-on experience with large datasets and data structures using SQL
- Experience working with financial and/or alternative data products
- Excellent analytical and strong problem-solving skills
- Exposure to S&P Capital IQ
- Exposure to data models on Databricks
- Education: B.E./B.Tech in Computer Science or related field

Key Responsibilities:
- Write data processes in Databricks using SQL
- Develop ELT processes for data preparation
- Apply SQL expertise to understand data sources and data structures
- Document the developed data processes
- Assist with related data tasks for model inputs within the Databricks environment
- Take data from S&P Capital IQ, prep it, and get it ready for the model

Key Metrics: SQL, Databricks, Snowflake, S&P Capital IQ, data structures

Behavioral Competencies:
- Good communication (verbal and written)
- Experience in managing client stakeholders

Posted 1 week ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We're hiring for one of the world's leading professional services firms, renowned for its commitment to innovation, excellence, and global impact. With a presence in over 150 countries, this organization provides services across consulting, audit, tax, risk advisory, and financial advisory, helping Fortune 500 companies and governments navigate complex challenges.

Job Title: Big Data Developer
Employment Type: Full-Time Employee (FTE)
Location: PAN India
Experience: 6+ years

About the Role:
We are seeking a highly skilled Big Data Developer with strong expertise in Spark and Scala to join our dynamic team. The ideal candidate will have hands-on experience with cloud platforms such as AWS, Azure, or GCP for big data processing and storage solutions. You will play a critical role in designing, developing, and maintaining scalable data pipelines and backend services using modern big data technologies.

Key Responsibilities:
- Develop, optimize, and maintain large-scale data processing pipelines using Apache Spark and Scala
- Implement and manage cloud-based big data storage and processing solutions on Azure Data Lake Storage (ADLS) and Azure Databricks
- Collaborate with cross-functional teams to understand data requirements and deliver scalable backend services using Java and the Spring Boot framework
- Ensure best practices in data security, performance optimization, and code quality
- Troubleshoot and resolve production issues related to big data workflows and backend services
- Continuously evaluate emerging technologies and propose enhancements to current systems

Must-Have Qualifications:
- 6+ years of experience in big data development
- Strong expertise in Apache Spark and Scala for data processing
- Hands-on experience with cloud platforms such as AWS, Azure, or GCP, with a strong focus on Azure Data Lake Storage (ADLS) and Azure Databricks
- Proficient in backend development using Java and the Spring Boot framework
- Experience in designing and implementing scalable and fault-tolerant data pipelines
- Solid understanding of big data architectures, ETL processes, and data modeling
- Excellent problem-solving skills and ability to work in an agile environment

Preferred Skills:
- Familiarity with containerization and orchestration tools like Docker and Kubernetes
- Knowledge of streaming technologies such as Kafka
- Experience with CI/CD pipelines and automated testing frameworks

What We Offer:
- Competitive salary based on experience and skills
- Flexible working options with PAN India presence
- Opportunity to work with cutting-edge big data technologies in a growing and innovative company
- Collaborative and supportive work culture with career growth opportunities

Posted 1 week ago

Apply

0.0 - 6.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

Job Title: Senior Data Engineer
Location: Coimbatore
Experience: 5+ years
Job Type: Full-Time

Key Responsibilities:
- Design, develop, and maintain robust data pipelines using Airflow and AWS services
- Implement and manage data warehousing using Databricks and PostgreSQL
- Automate recurring tasks using Git and Jenkins
- Build and optimize ETL processes leveraging AWS tools like S3, Lambda, AppFlow, and DMS
- Create interactive dashboards and reports using Looker
- Collaborate with various teams to ensure seamless integration of data infrastructure
- Ensure the performance, reliability, and scalability of data systems
- Use Jenkins for CI/CD and task automation

Required Skills & Expertise:
- Experience as a senior individual contributor on data-heavy projects
- Strong command of building data pipelines using Python and PySpark
- Expertise in relational database modeling, ideally with time-series data
- Proficiency in AWS services such as S3, Lambda, and Airflow
- Hands-on experience with SQL and database scripting
- Familiarity with Databricks and ThoughtSpot
- Experience using Jenkins for automation

Nice to Have:
- Proficiency in data analytics/BI tools such as Power BI, Tableau, Looker, or ThoughtSpot
- Experience with AWS Glue, AppFlow, and data transfer services
- Exposure to Terraform for infrastructure-as-code
- Experience in data quality testing
- Previous interaction with U.S.-based stakeholders
- Strong ability to work independently and lead tasks effectively

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
- 5+ years of relevant experience

Tech Stack: Databricks, PostgreSQL, Python & PySpark, AWS stack (S3, Lambda, Airflow, DMS, etc.), Power BI / Tableau / Looker / ThoughtSpot, Git / Jenkins / CI-CD tools

Job Type: Full-time
Pay: ₹500,000.00 - ₹2,500,000.00 per year
Ability to commute/relocate: Coimbatore, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required)
Experience: Data Engineer: 6 years (Required)
Work Location: In person
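For orientation, here is a minimal, hypothetical Airflow DAG showing the extract-then-load flow the responsibilities describe. The DAG id, task names, bucket, and helper functions are invented placeholders, not part of the actual stack.

# Hypothetical two-step Airflow DAG: pull files from S3, then load the warehouse.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_s3():
    # In a real pipeline this would pull files from S3 (e.g. with boto3) into staging
    print("extracting raw files from s3://example-bucket/raw/")

def load_to_warehouse():
    # Placeholder for a Databricks / PostgreSQL load step
    print("loading staged data into the warehouse")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # argument name varies slightly across Airflow versions
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_from_s3", python_callable=extract_from_s3)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load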

Posted 1 week ago

Apply

7.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Looking for someone:
Role: Senior Data Engineer (7-10 years)
Location: Bangalore / Hyderabad / Pune
Skills: Python, SQL, PySpark, Azure Databricks, Data Pipelines

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We're hiring for one of the world's leading professional services firms, renowned for its commitment to innovation, excellence, and global impact. With a presence in over 150 countries, this organization provides services across consulting, audit, tax, risk advisory, and financial advisory, helping Fortune 500 companies and governments navigate complex challenges.

Job Title: Big Data Developer
Employment Type: Full-Time Employee (FTE)
Location: PAN India
Experience: 6+ years

About the Role:
We are seeking a highly skilled Big Data Developer with strong expertise in Spark and Scala to join our dynamic team. The ideal candidate will have hands-on experience with cloud platforms such as AWS, Azure, or GCP for big data processing and storage solutions. You will play a critical role in designing, developing, and maintaining scalable data pipelines and backend services using modern big data technologies.

Key Responsibilities:
- Develop, optimize, and maintain large-scale data processing pipelines using Apache Spark and Scala
- Implement and manage cloud-based big data storage and processing solutions on Azure Data Lake Storage (ADLS) and Azure Databricks
- Collaborate with cross-functional teams to understand data requirements and deliver scalable backend services using Java and the Spring Boot framework
- Ensure best practices in data security, performance optimization, and code quality
- Troubleshoot and resolve production issues related to big data workflows and backend services
- Continuously evaluate emerging technologies and propose enhancements to current systems

Must-Have Qualifications:
- 6+ years of experience in big data development
- Strong expertise in Apache Spark and Scala for data processing
- Hands-on experience with cloud platforms such as AWS, Azure, or GCP, with a strong focus on Azure Data Lake Storage (ADLS) and Azure Databricks
- Proficient in backend development using Java and the Spring Boot framework
- Experience in designing and implementing scalable and fault-tolerant data pipelines
- Solid understanding of big data architectures, ETL processes, and data modeling
- Excellent problem-solving skills and ability to work in an agile environment

Preferred Skills:
- Familiarity with containerization and orchestration tools like Docker and Kubernetes
- Knowledge of streaming technologies such as Kafka
- Experience with CI/CD pipelines and automated testing frameworks

What We Offer:
- Competitive salary based on experience and skills
- Flexible working options with PAN India presence
- Opportunity to work with cutting-edge big data technologies in a growing and innovative company
- Collaborative and supportive work culture with career growth opportunities

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Overview:
We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.

Key Responsibilities:
1. Design, develop, test, and maintain scalable ETL data pipelines using Python.
2. Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
3. Implement and enforce data quality checks, validation rules, and monitoring.
4. Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
5. Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
6. Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
7. Document pipeline designs, data flow diagrams, and operational support procedures.
8. Work extensively on Google Cloud Platform (GCP) services such as:
- Dataflow for real-time and batch data processing
- Cloud Functions for lightweight serverless compute
- BigQuery for data warehousing and analytics
- Cloud Composer for orchestration of data workflows (Apache Airflow)
- Google Cloud Storage (GCS) for managing data at scale
- IAM for access control and security
- Cloud Run for containerized applications

Required Skills:
- 4-6 years of hands-on experience in Python for backend or data engineering projects
- Strong understanding of and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.)
- Solid understanding of data pipeline architecture, data integration, and transformation techniques
- Experience working with version control systems like GitHub and knowledge of CI/CD practices
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.)

Good to Have (Optional Skills):
- Experience working with the Snowflake cloud data platform
- Hands-on knowledge of Databricks for big data processing and analytics
- Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools
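As a small illustration of the Python-on-GCP work described above, the sketch below loads a staged CSV from Cloud Storage into BigQuery and then runs a simple validation query using the google-cloud-bigquery client. The project, bucket, dataset, and column names are placeholders, not details of this employer's environment.

# Hypothetical GCS-to-BigQuery load with a basic data quality check.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

# Load a staged file from GCS into a BigQuery table
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition="WRITE_TRUNCATE",
)
load_job = client.load_table_from_uri(
    "gs://example-bucket/staging/orders.csv",
    "my-analytics-project.sales.orders_raw",
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish

# Simple data quality check: fail fast if any required key is null
rows = client.query(
    "SELECT COUNT(*) AS bad_rows "
    "FROM `my-analytics-project.sales.orders_raw` WHERE order_id IS NULL"
).result()
bad_rows = next(iter(rows)).bad_rows
if bad_rows:
    raise ValueError(f"{bad_rows} rows failed validation")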

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We're hiring for one of the world's leading professional services firms, renowned for its commitment to innovation, excellence, and global impact. With a presence in over 150 countries, this organization provides services across consulting, audit, tax, risk advisory, and financial advisory, helping Fortune 500 companies and governments navigate complex challenges.

Job Title: Big Data Developer
Employment Type: Full-Time Employee (FTE)
Location: PAN India
Experience: 6+ years

About the Role:
We are seeking a highly skilled Big Data Developer with strong expertise in Spark and Scala to join our dynamic team. The ideal candidate will have hands-on experience with cloud platforms such as AWS, Azure, or GCP for big data processing and storage solutions. You will play a critical role in designing, developing, and maintaining scalable data pipelines and backend services using modern big data technologies.

Key Responsibilities:
- Develop, optimize, and maintain large-scale data processing pipelines using Apache Spark and Scala
- Implement and manage cloud-based big data storage and processing solutions on Azure Data Lake Storage (ADLS) and Azure Databricks
- Collaborate with cross-functional teams to understand data requirements and deliver scalable backend services using Java and the Spring Boot framework
- Ensure best practices in data security, performance optimization, and code quality
- Troubleshoot and resolve production issues related to big data workflows and backend services
- Continuously evaluate emerging technologies and propose enhancements to current systems

Must-Have Qualifications:
- 6+ years of experience in big data development
- Strong expertise in Apache Spark and Scala for data processing
- Hands-on experience with cloud platforms such as AWS, Azure, or GCP, with a strong focus on Azure Data Lake Storage (ADLS) and Azure Databricks
- Proficient in backend development using Java and the Spring Boot framework
- Experience in designing and implementing scalable and fault-tolerant data pipelines
- Solid understanding of big data architectures, ETL processes, and data modeling
- Excellent problem-solving skills and ability to work in an agile environment

Preferred Skills:
- Familiarity with containerization and orchestration tools like Docker and Kubernetes
- Knowledge of streaming technologies such as Kafka
- Experience with CI/CD pipelines and automated testing frameworks

What We Offer:
- Competitive salary based on experience and skills
- Flexible working options with PAN India presence
- Opportunity to work with cutting-edge big data technologies in a growing and innovative company
- Collaborative and supportive work culture with career growth opportunities

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Function: Data Science
Job: Machine Learning Engineer
Position: Senior
Immediate manager (N+1): AI Manager
Additional reporting line to: Global VP Engineering
Position location: Mumbai, Pune, Bangalore, Hyderabad, Noida

1. Purpose of the Job
The Senior Machine Learning Engineer is responsible for designing, implementing, and deploying scalable and efficient machine learning algorithms to solve complex business problems. The Machine Learning Engineer is also responsible for the lifecycle of models once deployed in production environments, through monitoring performance and model evolution. The position is highly technical and requires an ability to collaborate with multiple technical and non-technical profiles (data scientists, data engineers, data analysts, product owners, business experts), and to actively take part in a large data science community.

2. Organization Chart
A Machine Learning Engineer reports to the AI Manager, who reports to the Global VP Engineering.

3. Key Responsibilities and Expected Deliverables

Managing the lifecycle of machine learning models:
- Develop and implement machine learning models to solve complex business problems
- Ensure that models are accurate, efficient, reliable, and scalable
- Deploy machine learning models to production environments, ensuring that models are integrated with software systems
- Monitor machine learning models in production, ensuring that models are performing as expected and that any errors or performance issues are identified and resolved quickly
- Maintain machine learning models over time: update models as new data becomes available, retrain models to improve performance, and retire models that are no longer effective
- Develop and implement policies and procedures for ensuring the ethical and responsible use of machine learning models, including addressing issues related to bias, fairness, transparency, and accountability

Development of data science assets:
- Identify cross-use-case data science needs that could be mutualised in a reusable piece of code
- Design, contribute to, and participate in the implementation of Python libraries answering a transversal data science need that can be reused across several projects
- Maintain existing data science assets (time-series forecasting asset, model monitoring asset)
- Create documentation and a knowledge base on data science assets to ensure a good understanding from users
- Participate in asset demos to showcase new features to users

Be an active member of the Sodexo Data Science Community:
- Participate in the definition and maintenance of engineering standards and good practices around machine learning
- Participate in data science team meetings and regularly share knowledge, ask questions, and learn from others
- Mentor and guide junior machine learning engineers and data scientists
- Participate in relevant internal or external conferences and meet-ups

Continuous improvement:
- Stay up to date with the latest developments in the field: read research papers, attend conferences, and participate in trainings to expand knowledge and skills
- Identify and evaluate new technologies and tools that can improve the efficiency and effectiveness of machine learning projects
- Propose and implement optimizations for current machine learning workflows and systems; proactively identify areas of improvement within the pipelines
- Make sure that created code is compliant with our set of engineering standards

Collaboration with other data experts (Data Engineers, Platform Engineers, and Data Analysts):
- Participate in pull request reviews coming from other team members; ask for review and comments when submitting their own work
- Actively participate in the day-to-day life of the project (Agile rituals), the data science team (DS meeting), and the rest of the Global Engineering team

4. Education & Experience
- Engineering Master's degree or PhD in Data Science, Statistics, Mathematics, or related fields
- 5+ years of experience in a Data Scientist / Machine Learning Engineer role in large corporate organizations
- Experience working with ML models in a cloud ecosystem

Statistics & Machine Learning:
- Statistics: strong understanding of statistical analysis and modelling techniques (e.g., regression analysis, hypothesis testing, time series analysis)
- Classical ML: very strong knowledge of classical ML algorithms for regression and classification, supervised and unsupervised machine learning, both theoretical and practical (e.g., using scikit-learn, xgboost)
- ML niche: expertise in at least one of the following ML specialisations: time-series forecasting, Natural Language Processing, or Computer Vision
- Deep Learning: good knowledge of deep learning fundamentals (CNN, RNN, transformer architecture, attention mechanism) and one of the deep learning frameworks (pytorch, tensorflow, keras)
- Generative AI: good understanding of Generative AI specifics; previous experience working with Large Language Models is a plus (e.g., with openai, langchain)

MLOps:
- Model strategy: expertise in designing, implementing, and testing machine learning strategies
- Model integration: very strong skills in integrating a machine learning algorithm into a data science application in production
- Model performance: deep understanding of model performance evaluation metrics and existing libraries (e.g., scikit-learn, evidently)
- Model deployment: experience in deploying and managing machine learning models in production, using a specific cloud platform, model serving frameworks, or containerization
- Model monitoring: experience with model performance monitoring tools is a plus (Grafana, Prometheus)

Software Engineering:
- Python: very strong coding skills in Python, including modularity, OOP, and data and config manipulation frameworks (e.g., pandas, pydantic)
- Python ecosystem: strong knowledge of tooling in the Python ecosystem, such as dependency management (venv, poetry), documentation frameworks (e.g., sphinx, mkdocs, jupyter-book), and testing frameworks (unittest, pytest)
- Software engineering practices: experience in putting in place good software engineering practices such as design patterns, testing (unit, integration), clean code, and code formatting
- Debugging: ability to troubleshoot and debug issues within machine learning pipelines

Data Science Experimentation and Analytics:
- Data visualization: knowledge of data visualization tools such as plotly, seaborn, matplotlib, etc. to visualise, interpret, and communicate the results of machine learning models to stakeholders; basic knowledge of Power BI is a plus
- Data cleaning: experience with data cleaning and preprocessing techniques such as feature scaling, dimensionality reduction, and outlier detection (e.g., with pandas, scikit-learn)
- Data science experiments: understanding of experimental design and A/B testing methodologies

Data Processing:
- Databricks/Spark: basic knowledge of PySpark for big data processing
- Databases: basic knowledge of SQL to query data in internal systems
- Data formats: familiarity with different data storage formats such as Parquet and Delta

DevOps:
- Azure DevOps: experience using a DevOps platform such as Azure DevOps for Boards, Repositories, and Pipelines
- Git: experience working with code versioning (git), branch strategies, and collaborative work with pull requests; proficient with the most basic git commands
- CI/CD: experience in implementing and maintaining pipelines for continuous integration (including execution of the testing strategy) and continuous deployment is preferable

Cloud Platform:
- Azure Cloud: previous experience with services like Azure Machine Learning and/or Azure Databricks on Azure is preferable

Soft skills:
- Strong analytical and problem-solving skills, with attention to detail
- Excellent verbal and written communication and pedagogical skills with technical and non-technical teams
- Excellent teamwork and collaboration skills
- Adaptability and reactivity to new technologies, tools, and techniques
- Fluent in English

5. Competencies
- Communication & Collaboration
- Adaptability & Agility
- Analytical & Technical Skills
- Innovation & Change
- Rigorous Problem Solving & Troubleshooting
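For reference, here is a minimal illustration of the classical ML workflow the posting lists (train and evaluate a scikit-learn pipeline). It uses a dataset bundled with scikit-learn so it runs standalone; it is not tied to any Sodexo system or asset.

# Minimal scikit-learn sketch: preprocessing + classifier pipeline, evaluated on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Keeping scaling and the classifier in one pipeline makes preprocessing reproducible at inference time
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Evaluate with a threshold-free metric; in production this score would be tracked over time
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test ROC AUC: {auc:.3f}")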

Posted 1 week ago

Apply

12.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Associate Director - AI/ML Engineering

What you will do
Let’s do this. Let’s change the world. We are seeking an Associate Director of ML/AI Engineering to lead Amgen India’s AI engineering practice. This role is integral to developing top-tier talent, setting ML/AI best practices, and evangelizing ML/AI engineering capabilities across the organization. The Associate Director will be responsible for driving the successful delivery of strategic business initiatives by overseeing the technical architecture, managing talent, and establishing a culture of excellence in ML/AI.

The key aspects of this role are: (1) prior hands-on experience building ML and AI solutions, (2) management experience leading an ML/AI engineering team and developing talent, and (3) delivering AI initiatives at enterprise scale.

Roles & Responsibilities:
- Talent Growth & People Leadership: lead, mentor, and manage a high-performing team of engineers, fostering an environment that encourages learning, collaboration, and innovation. Focus on nurturing future leaders and providing growth opportunities through coaching, training, and mentorship.
- Recruitment & Team Expansion: develop a comprehensive talent strategy covering recruitment, retention, onboarding, and career development, and build a diverse and inclusive team that drives innovation, aligns with Amgen's culture and values, and delivers business priorities.
- Organizational Leadership: work closely with senior leaders within the function and across the Amgen India site to align engineering goals with broader organizational objectives, and demonstrate leadership by contributing to strategic discussions.
- Create and implement a strategy for expanding the AI/ML engineering team, including recruitment, onboarding, and talent development.
- Oversee the end-to-end lifecycle of AI/ML projects, from concept and design through to deployment and optimization, ensuring timely and successful delivery.
- Ensure adoption of MLOps best practices, including model versioning, testing, deployment, and monitoring.
- Collaborate with multi-functional teams, including product, data science, and software engineering, to find opportunities and deliver AI/ML solutions that drive business value.
- Serve as an AI/ML evangelist across the organization, promoting awareness and understanding of the capabilities and value of AI/ML technologies.
- Promote a culture of innovation and continuous learning within the team, encouraging the exploration of new tools, technologies, and methodologies.
- Provide technical leadership and mentorship, guiding engineers in implementing scalable and robust AI/ML systems.
- Work closely with collaborators to prioritize AI/ML projects and ensure timely delivery of key initiatives.
- Lead innovation initiatives to explore new AI/ML technologies, platforms, and tools that can drive further advancements in the organization’s AI capabilities.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master’s degree and 12 to 14 years of Computer Science, Artificial Intelligence, or Machine Learning experience, OR
- Bachelor’s degree and 14 to 18 years of Computer Science, Artificial Intelligence, or Machine Learning experience, OR
- Diploma and 18 to 20 years of Computer Science, Artificial Intelligence, or Machine Learning experience

Preferred Qualifications:
- Experience in building AI platforms and applications at enterprise scale
- Expertise in AI/ML frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, etc.
- Hands-on experience with LLMs, Generative AI, and NLP (e.g., GPT, BERT, Llama, Claude, Mistral AI)
- Strong understanding of MLOps processes and tools such as MLflow, Kubeflow, or similar platforms
- Proficiency in programming languages such as Python, R, or Scala
- Experience deploying AI/ML models in cloud environments (AWS, Azure, or Google Cloud)
- Proven track record of managing and delivering AI/ML projects at scale
- Excellent project management skills, with the ability to lead multi-functional teams and manage multiple priorities
- Experience in regulated industries, preferably life sciences and pharma

Good-to-Have Skills:
- Experience with natural language processing, computer vision, or reinforcement learning
- Knowledge of data governance, privacy regulations, and ethical AI considerations
- Experience with cloud-native AI/ML services (Databricks, AWS, Azure ML, Google AI Platform)
- Experience with AI observability

Professional Certifications (Preferred):
- Google Professional Machine Learning Engineer, AWS Certified Machine Learning, Azure AI Engineer Associate, or Databricks Certified Generative AI Engineer Associate

Soft Skills:
- Excellent leadership and communication skills, with the ability to convey complex technical concepts to non-technical collaborators
- Ability to foster a collaborative and innovative work environment
- Strong problem-solving abilities and attention to detail
- High degree of initiative and self-motivation
- Ability to mentor and develop team members, promoting their growth and success
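Since the preferred qualifications call out MLOps tooling such as MLflow, here is a brief, hedged sketch of what experiment tracking with MLflow and scikit-learn can look like; the run name, parameters, and dataset are placeholders chosen for illustration, not specifics from the posting.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One tracked experiment run: parameters, a metric, and the model artifact
with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestRegressor(**params, random_state=0).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("mae", mean_absolute_error(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```

Model versioning, testing, deployment, and monitoring, the MLOps practices this role is expected to enforce, build on exactly this kind of tracked, reproducible run.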

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

We are seeking an experienced and strategic Data Architect / Senior Data Engineer to design, build, and optimize scalable, secure, and high-performance data solutions. You will play a pivotal role in shaping our data infrastructure, working with technologies such as Databricks, Azure Data Factory, SQL, and PySpark.

Key Responsibilities:
- Design and develop scalable data pipelines using Databricks and the Medallion architecture (Bronze, Silver, Gold layers).
- Write efficient PySpark and SQL code for data transformation, cleansing, and enrichment.
- Build and manage data workflows in Azure Data Factory (ADF), including triggers, linked services, and integration runtimes.
- Optimize queries and data structures for performance and cost-efficiency.
- Develop and maintain CI/CD pipelines using GitHub for automated deployment and version control.
- Collaborate with cross-functional teams to define data strategies and drive data quality initiatives.
- Implement best practices for DevOps, CI/CD, and infrastructure-as-code in data engineering.
- Troubleshoot and resolve performance bottlenecks across Spark, ADF, and Databricks pipelines.

Requirements:
- Bachelor’s or master’s degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Data Architect or Senior Data Engineer.
- Strong knowledge of Databricks, Azure Data Factory, Spark (PySpark), and SQL.
- Hands-on experience with data governance, security frameworks, and catalog management.
- Proficiency in cloud platforms (preferably Azure).
- Experience with CI/CD tools and version control systems such as GitHub.
- Strong communication and collaboration skills.
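As a rough illustration of the Medallion-style PySpark work this posting describes, the sketch below moves hypothetical order data from a Bronze to a Silver Delta table. All paths and column names are invented for the example, and it assumes a Databricks-style environment where Delta Lake is available.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land the raw data as-is (source path and schema are placeholders)
bronze = spark.read.json("/mnt/raw/orders/")
bronze.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: cleaned, typed, and de-duplicated records
silver = (
    spark.read.format("delta").load("/mnt/bronze/orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")
```

In practice a Gold layer would aggregate the Silver table into business-facing marts, and an orchestrator such as ADF would trigger each step.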

Posted 1 week ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

About Company
Our client is a trusted global innovator of IT and business services. We help clients transform through consulting, industry solutions, business process services, digital & IT modernization and managed services. Our client enables them, as well as society, to move confidently into the digital future. We are committed to our clients’ long-term success and combine global reach with local client attention to serve them in over 50 countries around the globe.

Job Title: Azure Cloud Automation and Data Analytics Engineer
Location: Chennai
Experience: 5+ yrs
Job Type: Contract to hire
Notice Period: Immediate joiner

Mandatory Skills

Profile Summary:
We are seeking a dynamic and experienced Azure Cloud Automation and Data Analytics Engineer with a proactive attitude and strong independence. This role requires expertise in scripting and automating processes using infrastructure-as-code and scripting, along with proficiency in languages such as PowerShell, Python, and Bash. The ideal candidate will also have a background in data analytics and SQL. Awareness of security, operations, and networking within Azure is essential. This position combines aspects of DataOps, Site Reliability Engineering (SRE), Platform Engineering, and DevOps. We are looking for a candidate who excels in automation and development, rather than merely operational tasks.

Project: CONNECT LIFE SQUAD
Client responsible: Victor Salesa Sanz
Team: CONNECT LIFE SQUAD
Way of Working:

List of Key Responsibilities:
- Design, implement, and manage cloud infrastructure on Azure using Terraform and scripting languages.
- Automate deployment, configuration, and management of cloud resources.
- Develop and maintain data analytics solutions, including data pipelines and ETL processes.
- Write, optimize, and manage SQL queries and databases, and understand the results they produce.
- Ensure security and compliance of cloud environments.
- Collaborate on designing and implementing networking solutions within Azure.
- Conduct performance tuning, troubleshooting, and root cause analysis.
- Implement and manage monitoring, logging, and alerting systems.
- Interact with other teams in the company to get things done with the teams we depend on by tracking tickets, while remaining independent enough to understand what is required.
- Participate in on-call rotations and provide support for cloud operations.

Technical Knowledge (Technology — Level of expertise* — Priority — Must / Nice to have):
- Azure Storage / Azure Services / Azure Permissions — 3 — 1 — X
- Azure Databricks / Spark — 3 — 1 — X
- Azure SQL Server / Databases / SQL — 3 — 1 — X
- Docker and Containers / Azure Container Registry — 4 — 1 — X
- Azure Machine Learning / Airflow / Orchestration tools — 4 — 1 — X
- Azure DevOps Pipelines — 4 — 1 — X
- Python / PowerShell / Bash (programming languages) — 4 — 1 — X
- Terraform — 4 — 1 — X
- OAuth (authentication in Azure and tokens) — 4 — 2 — X
- REST APIs (service-oriented in Azure) — 4 — 2 — X

Soft Skills:
- Customer-oriented attitude: willing to provide solutions for any challenge our customer is facing and "all ears" to any suggestion — this is something we really value.
- Strong problem-solving skills and the ability to work independently. This is a must.
- Proactive attitude and excellent communication skills. This is a must.
- Not afraid of asking questions or taking on challenges.

Qualifications:
- Proven experience in cloud infrastructure management, specifically with Microsoft Azure, with a scripting approach rather than just click-ops.
- Expertise in scripting and automation using Terraform, PowerShell, Python, and Bash.
- Background in data analytics, including proficiency in SQL.
- Knowledge of security best practices in a cloud environment.
- Familiarity with Azure networking concepts and services.
- Experience with DevOps practices and tools, including CI/CD pipelines and version control.

*Level of expertise:
- Entry-level: individuals who are just starting their careers or have less than 2 years of experience in the field.
- Junior: professionals with 2-4 years of experience in the technology.
- Mid-level: individuals with 5-8 years of experience who have developed a solid foundation of knowledge and skills in their specific area of expertise.
- Senior: professionals with 8-10+ years of experience who possess advanced knowledge, expertise, and leadership capabilities within their field.
- Expert: professionals with 10+ years of experience who are recognized as industry experts, considered authorities in their field, and who often contribute to the advancement of the technology through research, innovation, and leadership.

List of Used Tools: described above.

Additional comments:
- Azure certifications (e.g., Azure Solutions Architect, Azure DevOps Engineer, Azure Security Engineer) are desired, but experience takes priority.
- Experience with other cloud platforms (e.g., AWS, Google Cloud) is a plus.

Qualifications: Bachelor's degree in Computer Science (or a related field)
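As an illustration of the OAuth and REST API items in the technical grid above, here is a small, hedged Python sketch that acquires an Azure AD token with the azure-identity package and lists subscriptions through the Azure Resource Manager REST API. It assumes the azure-identity and requests packages are installed and that a credential source usable by DefaultAzureCredential (environment variables, managed identity, or Azure CLI login) is available.

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire an AAD access token for the standard Azure Resource Manager scope
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")

# Call the ARM REST API with the bearer token to list subscriptions
response = requests.get(
    "https://management.azure.com/subscriptions?api-version=2020-01-01",
    headers={"Authorization": f"Bearer {token.token}"},
    timeout=30,
)
response.raise_for_status()

for subscription in response.json().get("value", []):
    print(subscription["subscriptionId"], subscription["displayName"])
```

The same token-plus-REST pattern applies to most service-oriented Azure APIs, whether called from Python, PowerShell, or a pipeline step.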

Posted 1 week ago

Apply

0.0 - 5.0 years

0 Lacs

Hyderabad, Telangana

On-site

Indeed logo

Experience: 5-10 years

JD:
- Mandatory skillset: Snowflake, DBT, and Data Architecture design experience in Data Warehouse.
- Good to have: Informatica or any other ETL knowledge or hands-on experience.
- Good to have: Databricks understanding.
- 5-10 years of IT experience, with 2+ years of Data Architecture experience in Data Warehouse and 3+ years in Snowflake.

Responsibilities
- Design, implement, and manage cloud-based solutions on AWS and Snowflake.
- Work with stakeholders to gather requirements and design solutions that meet their needs.
- Develop and execute test plans for new solutions.
- Oversee and design the information architecture for the data warehouse, including all information structures such as staging area, data warehouse, data marts, and operational data stores.
- Optimize Snowflake configurations and data pipelines to improve performance, scalability, and overall efficiency.
- Deep understanding of Data Warehousing, Enterprise Architectures, Dimensional Modeling, Star & Snowflake schema design, Reference DW Architectures, ETL architecture, ETL (Extract, Transform, Load), Data Analysis, Data Conversion, Transformation, Database Design, Data Warehouse Optimization, Data Mart Development, and Enterprise Data Warehouse Maintenance and Support.
- Significant experience working as a Data Architect with depth in data integration and data architecture for Enterprise Data Warehouse implementations (conceptual, logical, physical & dimensional models).
- Maintain documentation: develop and maintain detailed documentation for data solutions and processes.
- Provide training: offer training and leadership to share expertise and best practices with the team.
- Collaborate with and provide leadership to the data engineering team, ensuring that data solutions are developed according to best practices.

Job Type: Full-time
Pay: From ₹1,500,000.00 per year
Location Type: In-person
Schedule: Monday to Friday
Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required)

Application Question(s):
- What is your notice period?
- How many years of experience do you have in Snowflake?
- How many years of Data Architecture experience do you have in Data Warehouse?
- What is your current location?
- Are you comfortable working from the office in Hyderabad?
- What is your current CTC?
- What is your expected CTC?

Experience: total work: 5 years (Required)
Work Location: In person
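To give a flavour of the Snowflake work described, here is a brief, hedged sketch that uses the snowflake-connector-python package to query a hypothetical star schema; the account, warehouse, credentials, and table names are placeholders, not details from the posting.

```python
import snowflake.connector

# Connection parameters are placeholders; in practice they come from a secrets store
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="ANALYTICS_USER",
    password="***",
    warehouse="ANALYTICS_WH",
    database="DW",
    schema="MARTS",
)
try:
    cur = conn.cursor()
    # Simple dimensional query against an illustrative fact/dimension pair
    cur.execute("""
        SELECT d.calendar_month, SUM(f.amount) AS revenue
        FROM fact_sales f
        JOIN dim_date d ON f.date_key = d.date_key
        GROUP BY d.calendar_month
        ORDER BY d.calendar_month
    """)
    for month, revenue in cur.fetchall():
        print(month, revenue)
finally:
    conn.close()
```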

Posted 1 week ago

Apply

Exploring Databricks Jobs in India

Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Databricks professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum

Career Path

In the field of Databricks, a typical career path may include:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect

Related Skills

In addition to Databricks expertise, other skills that are often expected or helpful include:
- Apache Spark
- Python/Scala programming
- Data modeling
- SQL
- Data visualization tools

Interview Questions

  • What is Databricks and how is it different from Apache Spark? (basic)
  • Explain the concept of lazy evaluation in Databricks. (medium)
  • How do you optimize performance in Databricks? (advanced)
  • What are the different cluster modes in Databricks? (basic)
  • How do you handle data skewness in Databricks? (medium)
  • Explain how you can schedule jobs in Databricks. (medium)
  • What is the significance of Delta Lake in Databricks? (advanced)
  • How do you handle schema evolution in Databricks? (medium)
  • What are the different file formats supported by Databricks for reading and writing data? (basic)
  • Explain the concept of checkpointing in Databricks. (medium)
  • How do you troubleshoot performance issues in Databricks? (advanced)
  • What are the key components of Databricks Runtime? (basic)
  • How can you secure your data in Databricks? (medium)
  • Explain the role of MLflow in Databricks. (advanced)
  • How do you handle streaming data in Databricks? (medium)
  • What is the difference between Databricks Community Edition and Databricks Workspace? (basic)
  • How do you set up monitoring and alerting in Databricks? (medium)
  • Explain the concept of Delta caching in Databricks. (advanced)
  • How do you handle schema enforcement in Databricks? (medium) — see the short sketch after this list
  • What are the common challenges faced in Databricks projects and how do you overcome them? (advanced)
  • How do you perform ETL operations in Databricks? (medium)
  • Explain the concept of MLflow Tracking in Databricks. (advanced)
  • How do you handle data lineage in Databricks? (medium)
  • What are the best practices for data governance in Databricks? (advanced)
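For the schema enforcement and schema evolution questions above, the following is a short, illustrative PySpark sketch of Delta Lake's behaviour. It assumes a Databricks-style runtime where Delta Lake is already configured; the paths and columns are invented for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-schema-sketch").getOrCreate()

# Create a small Delta table with two columns
events = spark.createDataFrame([(1, "click"), (2, "view")], ["id", "event"])
events.write.format("delta").mode("overwrite").save("/tmp/delta/events")

# Schema enforcement: appending a frame with an extra column is rejected by default
more = spark.createDataFrame([(3, "click", "IN")], ["id", "event", "country"])
try:
    more.write.format("delta").mode("append").save("/tmp/delta/events")
except Exception as err:
    print("rejected by schema enforcement:", type(err).__name__)

# Schema evolution: the same append succeeds once mergeSchema is explicitly allowed
more.write.format("delta").mode("append") \
    .option("mergeSchema", "true").save("/tmp/delta/events")
```

Being able to explain when to rely on enforcement and when to opt in to evolution is usually what interviewers are probing for with these questions.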

Closing Remark

As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies