
3318 Databricks Jobs - Page 37

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Position Overview
We are looking for an experienced Lead Data Engineer to join our dynamic team. If you are passionate about building scalable software solutions and working collaboratively with cross-functional teams to define requirements and deliver solutions, we would love to hear from you. ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries by focusing on creating value through innovation.

Job Responsibilities:
- Develop and maintain data pipelines and ETL/ELT processes using Python
- Design and implement scalable, high-performance applications
- Work collaboratively with cross-functional teams to define requirements and deliver solutions
- Develop and manage near real-time data streaming solutions using Pub/Sub or Beam
- Contribute to code reviews, architecture discussions, and continuous improvement initiatives
- Monitor and troubleshoot production systems to ensure reliability and performance

Basic Qualifications:
- 5+ years of professional software development experience with Python
- Strong understanding of software engineering best practices (testing, version control, CI/CD)
- Experience building and optimizing ETL/ELT processes and data pipelines
- Proficiency with SQL and database concepts
- Experience with data processing frameworks (e.g., Pandas)
- Understanding of software design patterns and architectural principles
- Ability to write clean, well-documented, and maintainable code
- Experience with unit testing and test automation
- Experience working with any cloud provider (GCP is preferred)
- Experience with CI/CD pipelines and infrastructure as code
- Experience with containerization technologies like Docker or Kubernetes
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
- Proven track record of delivering complex software projects
- Excellent problem-solving and analytical thinking skills
- Strong communication skills and ability to work in a collaborative environment

Preferred Qualifications:
- Experience with GCP services, particularly Cloud Run and Dataflow
- Experience with stream processing technologies (Pub/Sub)
- Familiarity with big data technologies (Airflow)
- Experience with data visualization tools and libraries
- Knowledge of CI/CD pipelines with GitLab and infrastructure as code with Terraform
- Familiarity with platforms like Snowflake, BigQuery, or Databricks
- GCP Data Engineer certification

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
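For context on the near real-time streaming requirement above, a minimal Pub/Sub consumer in Python might look like the sketch below. This is illustrative only; the project and subscription IDs are invented placeholders, not details from the listing.

```python
# Sketch: near real-time consumption with the Google Cloud Pub/Sub client.
# PROJECT_ID and SUBSCRIPTION_ID are hypothetical placeholders.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

PROJECT_ID = "my-gcp-project"
SUBSCRIPTION_ID = "events-sub"

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # A real pipeline would parse, validate, and load the payload here.
    print(f"Received: {message.data!r}")
    message.ack()

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull_future.result(timeout=30)  # listen for 30s, then stop
    except TimeoutError:
        streaming_pull_future.cancel()
        streaming_pull_future.result()  # block until shutdown completes
```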

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description
At Rolls-Royce, we look beyond tomorrow. We continually pioneer integrated power and propulsion solutions to deliver cleaner, safer and more competitive power. Rolls-Royce Power Systems is headquartered in Friedrichshafen in southern Germany and employs around 9,000 people. The product portfolio includes MTU-brand high-speed engines and propulsion systems for ships, power generation, heavy land, rail and defense vehicles and for the oil and gas industry, as well as diesel and gas systems and battery containers for mission-critical, standby and continuous power, combined generation of heat and power, and microgrids.

We are seeking an experienced and highly motivated Project Manager to oversee the supply, installation, testing and commissioning of diesel/gas generators. The Project Manager will be responsible for ensuring the successful planning, execution, and completion of generator installation projects, ensuring adherence to budget, timeline, quality, and safety standards. The role requires coordination with multiple stakeholders, including internal teams, contractors, vendors, and clients, to ensure that the power infrastructure is installed and commissioned efficiently to meet operational requirements.

Work with us and we’ll welcome you into an inclusive culture, one that invests in your continuous learning and development, and gives you access to a wide breadth and depth of experience.

Internship Program – Key Opportunities And Responsibilities
- Study literature and develop scripts and algorithms for the defined area of research
- Perform data and statistical analysis
- Develop predictive maintenance models and implement alarm systems
- Automate tasks related to data logger commissioning
- Develop models related to reliability, life cycle cost and reuse rate

Ideal Candidate/Qualification
- Graduate (B.E./B.Tech.) or postgraduate (MS/ME/M.Tech.) in the final year of Computer Science, IT, Electronics, Mechatronics or an equivalent field
- Strong hands-on experience with Python and other languages such as Java and R
- Knowledge of machine learning and artificial intelligence is advantageous, along with sound knowledge of statistics
- Experience with cloud computing platforms, preferably Microsoft Azure and Databricks; good knowledge of the Hadoop 2.0 ecosystem and data structures
- Competent in algorithm development and optimization with respect to time and space complexity
- Sufficient knowledge of stream processing, and working knowledge of PySpark/Spark to handle big data
- Knowledge of automation using scripts
- Able to work in an agile environment, within a self-organizing team
- Collaboration and teamwork, with a willingness to share solutions and best practices across teams
- Proactive in approach, with the ability to apply logical, analytical, and innovative thinking to a range of technical problems

Location – Pune
Internship Duration – 6 months

We are an equal opportunities employer. We’re committed to developing a diverse workforce and an inclusive working environment. We believe that people from different backgrounds and cultures give us different perspectives which are crucial to innovation and problem solving. We believe the more diverse perspectives we have, the more successful we’ll be. By building a culture of caring and belonging, we give everyone who works here the opportunity to realize their full potential. You can learn more about our global Inclusion strategy at Our people | Rolls-Royce.

Type of Contract: Temporary (Fixed Term)
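To make the predictive-maintenance task above concrete, here is a toy sketch of the kind of classifier an intern might prototype. The sensor features and failure label are entirely synthetic, invented for illustration.

```python
# Toy predictive-maintenance sketch with synthetic data (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Hypothetical sensor features: temperature, vibration, operating hours.
X = rng.normal(size=(1000, 3))
# Synthetic failure label driven by vibration plus temperature.
y = (X[:, 1] + 0.5 * X[:, 0] > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")
```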

Posted 1 week ago

Apply

10.0 years

0 Lacs

India

Remote


Job Description

EMPLOYMENT TYPE: Full-Time, Permanent
LOCATION: Remote (Pan India)
SHIFT TIMINGS: 2:00 pm - 11:00 pm IST
BUDGET: As per company standards
REPORTING: This position will report to our CEO or any other lead as assigned by Management.

The Senior Data Engineer will be responsible for building and extending our data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys working with big data and building systems from the ground up. You will collaborate with our software engineers, database architects, data analysts, and data scientists to ensure our data delivery architecture is consistent throughout the platform. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

What You’ll Be Doing:
● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
● Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
● Work with machine learning, data, and analytics experts to drive innovation, accuracy, and greater functionality in our data system.

Qualifications:
● Bachelor's degree in Engineering, Computer Science, or a relevant field.
● 10+ years of relevant and recent experience in a Data Engineer role.
● 5+ years of recent experience with Apache Spark and a solid understanding of the fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Strong coding skills with Scala, Python, Java, and/or other languages, and the ability to quickly switch between them with ease.
● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Experience working with data stored in many formats, including Delta tables, Parquet, CSV, and JSON.
● Comfortable working in a Linux shell environment and writing scripts as needed.
● Comfortable working in an Agile environment.
● Machine Learning knowledge is a plus.
● Must be capable of working independently and delivering stable, efficient, and reliable software.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic environment.
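As a quick illustration of the multi-format requirement above (Delta, Parquet, CSV, JSON), reading each with PySpark might look like this. Paths are placeholders, and the Delta read assumes the delta-spark package is available on the cluster.

```python
# Sketch: reading the formats the posting names with PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-demo").getOrCreate()

parquet_df = spark.read.parquet("s3://bucket/events/")           # columnar
csv_df = spark.read.option("header", True).csv("/data/raw.csv")  # header row
json_df = spark.read.json("/data/raw.json")                      # semi-structured
delta_df = spark.read.format("delta").load("/data/delta_table")  # needs Delta Lake
```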

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

On-site


Coursera was launched in 2012 by Andrew Ng and Daphne Koller, with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 175 million registered learners as of March 31, 2025. Coursera partners with over 350 leading universities and industry leaders to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera’s platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp.

Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns.

At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and a compatible timezone overlap with their team to facilitate seamless collaboration. Coursera is committed to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, we enable you to select your main way of working, whether it's from home, one of our offices or hubs, or a co-working space near you.

About The Role
We at Coursera are seeking a highly skilled and motivated AI Specialist with expertise in developing and deploying advanced AI solutions. The ideal candidate will have 3+ years of experience, with a strong focus on leveraging AI technologies to derive insights, build predictive models, and enhance platform capabilities. This role offers a unique opportunity to contribute to cutting-edge projects that transform the online learning experience.

Key Responsibilities
- Deploy and customize AI/ML solutions using tools and platforms from Google AI, AWS, or other providers.
- Develop and optimize customer journey analytics to identify actionable insights and improve user experience.
- Design, implement, and optimize models for predictive analytics, information extraction, semantic parsing, and topic modelling.
- Perform comprehensive data cleaning and preprocessing to ensure high-quality inputs for model training and deployment.
- Build, maintain, and refine AI pipelines for data gathering, curation, model training, evaluation, and monitoring.
- Analyze large-scale datasets, including customer reviews, to derive insights for improving recommendation systems and platform features.
- Train and support team members in adopting and managing AI-driven tools and processes.
- Document solutions, workflows, and troubleshooting processes to ensure knowledge continuity.
- Stay informed on emerging AI/ML technologies to recommend suitable solutions for new use cases.
- Evaluate and enhance the quality of video and audio content using AI-driven techniques.

Qualifications
Education: Bachelor's degree in Computer Science, Machine Learning, or a related field (required).
Experience:
- 3+ years of experience in AI/ML development, with a focus on predictive modelling and data-driven insights.
- Proven experience in deploying AI solutions using platforms like Google AI, AWS, Microsoft Azure, or similar.
- Proficiency in programming languages such as Python, Java, or similar for AI tool customization and deployment.
- Strong understanding of APIs, cloud services, and integration of AI tools with existing systems.
- Proficiency in building and scaling AI pipelines for data engineering, model training, and monitoring.
- Experience with frameworks and libraries for building AI agents, such as LangChain and AutoGen.
- Familiarity with designing autonomous workflows using LLMs and external APIs.
Technical Skills:
- Programming: Advanced proficiency in Python, PyTorch, TensorFlow, and scikit-learn.
- Data Engineering: Expertise in data cleaning, preprocessing, and handling large-scale datasets. Preferred experience with tools like AWS Glue, PySpark, and AWS S3.
- Cloud Technologies: Experience with AWS SageMaker, Google AI, Google Vertex AI, and Databricks.
- Strong SQL skills and advanced proficiency in statistical programming languages such as Python, along with experience using data manipulation libraries (e.g., Pandas, NumPy).

Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California candidates, please review our CCPA Applicant Notice here. For our global candidates, please review our GDPR Recruitment Notice here.
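The data cleaning and preprocessing step named above might look like the small pandas sketch below. The column names and toy review data are invented for illustration only.

```python
# Hypothetical preprocessing of learner-review text before model training.
import pandas as pd

reviews = pd.DataFrame({
    "review": ["Great course!!", None, "  Too fast ", "Great course!!"],
    "rating": [5, 3, 2, 5],
})

clean = (
    reviews
    .dropna(subset=["review"])                                   # drop missing text
    .assign(review=lambda d: d["review"].str.strip().str.lower())  # normalize
    .drop_duplicates()                                           # remove exact dupes
)
print(clean)
```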

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

On-site


Are you a visionary who thrives on designing future-ready data ecosystems? Let’s build the next big thing together! We're working with top retail and healthcare leaders to transform how they harness data, and we’re looking for a Data Architect to guide that journey.

We are looking for an experienced Data Architect with deep knowledge of Databricks and cloud-native data architecture. This role will drive the design and implementation of scalable, high-performance data platforms to support advanced analytics, business intelligence, and data science initiatives within a retail or healthcare environment.

Key Responsibilities
- Define and implement enterprise-level data architecture strategies using Databricks.
- Design end-to-end data ecosystems including ingestion, transformation, storage, and access layers.
- Lead data governance, data quality, and security initiatives across the organization.
- Work with stakeholders to align data architecture with business goals and compliance requirements.
- Guide the engineering team on best practices in data modeling, pipeline development, and system optimization.
- Champion the use of Delta Lake, Lakehouse architecture, and real-time analytics.

Required Qualifications
- 8+ years of experience in data architecture or solution architecture roles.
- Strong expertise in Databricks, Spark, Delta Lake, and data warehousing concepts.
- Solid understanding of modern data platform tools (Snowflake, Azure Synapse, BigQuery, etc.).
- Experience with cloud architecture (Azure preferred), data governance, and MDM.
- Strong understanding of healthcare or retail data workflows and regulatory requirements.
- Excellent communication and stakeholder management skills.

Benefits
Health insurance, accident insurance. The salary will be determined based on several factors including, but not limited to, location, relevant education, qualifications, experience, technical skills, and business needs.

Additional Responsibilities
- Participate in OP monthly team meetings and team-building efforts.
- Contribute to OP technical discussions, peer reviews, etc.
- Contribute content and collaborate via the OP-Wiki/Knowledge Base.
- Provide status reports to OP Account Management as requested.

About Us
OP is a technology consulting and solutions company, offering advisory and managed services, innovative platforms, and staffing solutions across a wide range of fields, including AI, cyber security, enterprise architecture, and beyond. Our most valuable asset is our people: dynamic, creative thinkers who are passionate about doing quality work. As a member of the OP team, you will have access to industry-leading consulting practices, strategies and technologies, and innovative training and education. An ideal OP team member is a technology leader with a proven track record of technical excellence and a strong focus on process and methodology.
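The Delta Lake and Lakehouse emphasis above typically implies a medallion-style layering. As a rough sketch only, assuming a Databricks runtime where `spark` and Delta Lake are already available, a bronze-to-silver hop might look like this (table paths and quality rules are invented placeholders):

```python
# Sketch of the bronze -> silver hop in a medallion-style Lakehouse.
from pyspark.sql import functions as F

bronze = spark.read.format("delta").load("/lakehouse/bronze/orders")

silver = (
    bronze
    .dropDuplicates(["order_id"])               # de-duplicate on business key
    .filter(F.col("order_total") >= 0)          # basic data-quality rule
    .withColumn("processed_at", F.current_timestamp())
)

silver.write.format("delta").mode("overwrite").save("/lakehouse/silver/orders")
```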

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

On-site


Love turning raw data into powerful insights? Join us! We're partnering with global brands to unlock the full potential of their data. As a Data Engineer, you'll be at the heart of these transformations: building scalable data pipelines, optimizing data flows, and empowering analytics teams to make real-time, data-driven decisions.

We are seeking a highly skilled Data Engineer with hands-on experience in Databricks to support data integration, pipeline development, and large-scale data processing for our retail or healthcare client. The ideal candidate will work closely with cross-functional teams to design robust data solutions that drive business intelligence and operational efficiency.

Key Responsibilities
- Develop and maintain scalable data pipelines using Databricks and Spark.
- Build ETL/ELT workflows to support data ingestion, transformation, and validation.
- Collaborate with data scientists, analysts, and business stakeholders to gather data requirements.
- Optimize data processing workflows for performance and reliability.
- Manage structured and unstructured data across cloud-based data lakes and warehouses (e.g., Delta Lake, Snowflake, Azure Synapse).
- Ensure data quality and compliance with data governance standards.

Required Qualifications
- 4+ years of experience as a Data Engineer.
- Strong expertise in Databricks, Apache Spark, and Delta Lake.
- Proficiency in Python, SQL, and data pipeline orchestration tools (e.g., Airflow, ADF).
- Experience with cloud platforms such as Azure, AWS, or GCP.
- Familiarity with data modeling, version control, and CI/CD practices.
- Experience in the retail or healthcare domain is a plus.

Benefits
Health insurance, accident insurance. The salary will be determined based on several factors including, but not limited to, location, relevant education, qualifications, experience, technical skills, and business needs.

Additional Responsibilities
- Participate in OP monthly team meetings and team-building efforts.
- Contribute to OP technical discussions, peer reviews, etc.
- Contribute content and collaborate via the OP-Wiki/Knowledge Base.
- Provide status reports to OP Account Management as requested.

About Us
OP is a technology consulting and solutions company, offering advisory and managed services, innovative platforms, and staffing solutions across a wide range of fields, including AI, cyber security, enterprise architecture, and beyond. Our most valuable asset is our people: dynamic, creative thinkers who are passionate about doing quality work. As a member of the OP team, you will have access to industry-leading consulting practices, strategies and technologies, and innovative training and education. An ideal OP team member is a technology leader with a proven track record of technical excellence and a strong focus on process and methodology.
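Airflow is named above as one possible orchestration tool. A minimal DAG wiring an extract-transform-load sequence might look like the sketch below; the task bodies and schedule are illustrative placeholders, and `schedule` assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Hedged sketch of ETL orchestration with an Airflow DAG.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():   print("pull from source")          # placeholder step
def transform(): print("clean and validate")        # placeholder step
def load():      print("write to Delta/warehouse")  # placeholder step

with DAG(
    dag_id="retail_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; earlier versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency chain
```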

Posted 1 week ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


Senior Data Engineer (Remote, 6-Month Contract): Databricks, ADF, and PySpark

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain a secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills
- Experience: 6+ years in Data Engineering
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults
- Core expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance
- Agile, SDLC, containerization (Docker), clean coding practices

Good-to-Have Skills
- Event Hubs, Logic Apps
- Power BI
- Strong logic building and a competitive programming background

Location: Remote; Mumbai, Delhi/NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune
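The Databricks-plus-Key-Vault combination above often reduces to reading ADLS Gen2 data with a secret pulled from a Key Vault-backed secret scope. A minimal sketch, assuming a Databricks notebook where `spark` and `dbutils` exist (the scope, key, account, and container names are hypothetical):

```python
# Sketch: reading ADLS Gen2 from Databricks using a Key Vault-backed secret.
storage_account = "mydatalake"  # hypothetical storage account
storage_key = dbutils.secrets.get(scope="kv-scope", key="adls-key")

spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    storage_key,
)

df = spark.read.parquet(
    f"abfss://raw@{storage_account}.dfs.core.windows.net/sales/2025/"
)
df.show(5)
```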

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

Remote


For an international project in Chennai, we are urgently looking for a fully remote (Senior) Databricks Developer with 5+ years of experience. We are looking for a motivated contractor. Candidates need to be fluent in English.

Tasks and responsibilities:
- Collaborate with data architects and analysts to design robust data pipelines on the Databricks platform;
- Develop scalable and efficient ETL processes to ingest, transform, and store large volumes of data;
- Ensure data quality and integrity through the implementation of validation and cleansing processes;
- Optimize data pipelines for performance, scalability, and cost-effectiveness;
- Monitor and troubleshoot data pipeline issues to ensure seamless data flow and processing;
- Implement best practices for data storage, retrieval, and processing to enhance system performance;
- Work closely with cross-functional teams to understand data requirements and deliver solutions that meet business needs;
- Document data pipeline designs, processes, and configurations for future reference and knowledge sharing;
- Provide technical guidance and support to team members and stakeholders on Databricks-related features.

Profile:
- Bachelor's or Master's degree;
- 5+ years of experience in Data Science roles;
- Azure Databricks for developing, managing, and optimizing big data solutions on the Azure platform;
- Programming skills in Python for writing data processing scripts and working with machine learning models;
- Advanced SQL skills for querying and manipulating data within Databricks and integrating with other Azure services;
- Azure Data Lake Storage (ADLS) for storing and accessing large volumes of structured and unstructured data and ensuring data reliability and consistency in Databricks;
- Power BI integration for creating interactive data visualizations and dashboards;
- PowerApps integration for building custom business applications that leverage big data insights;
- Data engineering, including ETL processes and data pipeline development;
- Azure DevOps for implementing CI/CD pipelines and managing code repositories;
- Machine learning concepts and tools within Databricks for developing predictive models;
- Azure Synapse Analytics for integrating big data and data warehousing solutions;
- Azure Functions for creating serverless computing solutions that integrate with Databricks;
- Databricks REST API for automating tasks and integrating with other systems;
- Azure Active Directory for managing user access and security within Azure Databricks;
- Azure Blob Storage for storing and retrieving large amounts of unstructured data;
- Azure Monitor for tracking and analyzing the performance of Databricks applications;
- Familiarity with data governance practices for ensuring compliance and data quality in big data projects;
- Fluent in English.
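The validation and cleansing responsibility above often takes the shape of a validate-and-quarantine step. A minimal sketch, assuming a Databricks session where `spark` exists; the mount paths, columns, and rules are invented for illustration:

```python
# Sketch: split incoming records into valid and quarantined Delta tables.
from pyspark.sql import functions as F

raw = spark.read.json("/mnt/raw/customers/")

# Hypothetical validity rules: key present and email roughly well-formed.
valid_cond = F.col("customer_id").isNotNull() & F.col("email").contains("@")

raw.filter(valid_cond).write.format("delta").mode("append").save(
    "/mnt/curated/customers"
)
raw.filter(~valid_cond).write.format("delta").mode("append").save(
    "/mnt/quarantine/customers"   # kept aside for investigation
)
```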

Posted 1 week ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies, for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on Cloud Data Platforms on AWS.
- Experience with AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical And Professional Experience
- Certification in AWS, and Databricks- or Cloudera Spark-certified developers.
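The streaming-pipeline requirement above commonly pairs Spark Structured Streaming with Kafka. A minimal sketch of that pattern follows; the broker, topic, and output paths are placeholders, and the job assumes the spark-sql-kafka package is on the cluster.

```python
# Sketch: Kafka ingest with Spark Structured Streaming.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .selectExpr("CAST(value AS STRING) AS payload")    # decode message bytes
)

query = (
    stream.writeStream.format("parquet")
    .option("path", "/data/bronze/events")
    .option("checkpointLocation", "/chk/events")       # exactly-once bookkeeping
    .start()
)
query.awaitTermination()
```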

Posted 1 week ago

Apply

3.0 - 6.0 years

15 - 20 Lacs

Bengaluru

Remote


Key Responsibilities:
- Develop, design, and maintain dashboards and reports using Tableau and Power BI to support business decision-making.
- Write and optimize complex SQL queries to extract, manipulate, and analyze data from multiple sources.
- Collaborate with cross-functional teams to understand business needs and translate them into effective data solutions.
- Work with AWS Redshift and Databricks for data extraction, transformation, and loading (ETL) processes.
- Proactively identify and resolve data issues, acting as a solution finder to overcome challenges and drive improvements.
- Work independently, taking ownership of tasks and ensuring high-quality deliverables within deadlines.
- Be a strong team player, contributing to team knowledge sharing and fostering a collaborative environment.
- Apply knowledge of US healthcare systems to help build relevant data solutions and insights.

Required Skills & Qualifications:
- Minimum 3 years of experience in data analysis, business intelligence, or related roles.
- Strong expertise in SQL for data querying and manipulation.
- Extensive experience creating dashboards and reports using Tableau and Power BI.
- Hands-on experience working with AWS Redshift and Databricks.
- Proven problem-solving skills with a focus on providing actionable data solutions.
- Self-motivated and able to work independently, while being a proactive team player.
- Experience with, or a strong understanding of, US healthcare systems and data-related needs.
- Excellent communication skills with the ability to work across different teams and stakeholders.

Desired Skills (Nice to Have):
- Familiarity with other BI tools or cloud platforms.
- Experience in healthcare data analysis or healthcare analytics.
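As a rough illustration of the SQL-plus-Databricks workflow above, the sketch below pulls an aggregate into pandas via the databricks-sql-connector package. The hostname, HTTP path, token, and the claims table are hypothetical placeholders; real credentials should come from a secret store, not the script.

```python
# Sketch: querying a Databricks SQL warehouse into pandas for analysis.
import pandas as pd
from databricks import sql  # pip install databricks-sql-connector

with sql.connect(
    server_hostname="adb-1234567890.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abc123",                # placeholder
    access_token="dapi-...",  # placeholder; use a secret store in practice
) as conn, conn.cursor() as cur:
    cur.execute(
        "SELECT claim_type, COUNT(*) AS n FROM claims GROUP BY claim_type"
    )
    rows = cur.fetchall()
    df = pd.DataFrame(rows, columns=[c[0] for c in cur.description])

print(df.head())
```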

Posted 1 week ago

Apply

8.0 - 12.0 years

25 - 40 Lacs

Chennai

Work from Office


We are seeking a highly skilled Data Architect to design and implement robust, scalable, and secure data solutions on AWS Cloud. The ideal candidate should have expertise in AWS services, data modeling, ETL processes, and big data technologies, with hands-on experience in Glue, DMS, Python, PySpark, and MPP databases like Snowflake, Redshift, or Databricks.

Key Responsibilities:
- Architect and implement data solutions leveraging AWS services such as EC2, S3, IAM, Glue (mandatory), and DMS for efficient data processing and storage.
- Develop scalable ETL pipelines using AWS Glue, Lambda, and PySpark to support data transformation, ingestion, and migration.
- Design and optimize data models following Medallion architecture, Data Mesh, and Enterprise Data Warehouse (EDW) principles.
- Implement data governance, security, and compliance best practices using IAM policies, encryption, and data masking.
- Work with MPP databases such as Snowflake, Redshift, or Databricks, ensuring performance tuning, indexing, and query optimization.
- Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to design efficient data integration strategies.
- Ensure high availability and reliability of data solutions by implementing monitoring, logging, and automation in AWS.
- Evaluate and recommend best practices for ETL workflows, data pipelines, and cloud-based data warehousing solutions.
- Troubleshoot performance bottlenecks and optimize query execution plans, indexing strategies, and data partitioning.

Job Requirements

Required Qualifications & Skills:
- Strong expertise in AWS cloud services: compute (EC2), storage (S3), and security (IAM).
- Proficiency in programming languages: Python, PySpark, and AWS Lambda.
- Mandatory experience in ETL tools: AWS Glue and DMS for data migration and transformation.
- Expertise in MPP databases: Snowflake, Redshift, or Databricks; knowledge of RDBMS (Oracle, SQL Server) is a plus.
- Deep understanding of data modeling techniques: Medallion architecture, Data Mesh, and EDW principles.
- Experience in designing and implementing large-scale, high-performance data solutions.
- Strong analytical and problem-solving skills, with the ability to optimize data pipelines and storage solutions.
- Excellent communication and collaboration skills, with experience working in agile environments.

Preferred Qualifications:
- AWS certification (AWS Certified Data Analytics, AWS Certified Solutions Architect, or equivalent).
- Experience with real-time data streaming (Kafka, Kinesis, or similar).
- Familiarity with Infrastructure as Code (Terraform, CloudFormation).
- Understanding of data governance frameworks and compliance standards (GDPR, HIPAA, etc.).
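Since Glue is mandatory for this role, the standard shape of a Glue PySpark job is worth illustrating. The skeleton below follows the usual Glue job boilerplate; the catalog database, table, bucket, and partition key are invented placeholders, and the script only runs inside the Glue runtime.

```python
# Sketch of an AWS Glue PySpark job: catalog read -> partitioned Parquet on S3.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (placeholder database/table).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Write partitioned Parquet to S3 (placeholder bucket and partition key).
glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={
        "path": "s3://curated-bucket/orders/",
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)
job.commit()
```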

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: Senior Data Analyst
Career Level: D1

Introduction to role
Are you ready to lead the charge in data management excellence? As a Senior Data Analyst, you'll be instrumental in driving operational and technical proficiency for the US BBU. Your role is crucial in ensuring data accuracy and efficiency, supporting key business functions to achieve strategic goals. You'll bridge the gap between business collaborators and IT, translating sophisticated needs into actionable data solutions that enhance decision-making. Your analytical prowess will guide the development of innovative data products, influencing business strategy and fostering collaboration across teams. With a focus on leadership, you'll mentor a team of data professionals, encouraging continuous improvement and innovation. Are you prepared to deliver clear, actionable insights and drive business transformation?

Accountabilities
- Provide operational and technical support for US BBU data management activities: data quality management, business process workflows, and data management needs for downstream applications and tools.
- Fix and triage operational issues related to data processing, business user queries, data investigation, and ad-hoc analytics.
- Perform data validation, reconciliation, and basic ad-hoc analyses to support business teams.
- Act as a liaison between Commercial/Medical collaborators and IT for customer concerns and issue resolution.
- Assist in handling access, user roles, and updates across platforms like Sharp.

Essential Skills/Experience
- Quantitative bachelor's degree from an accredited college or university in one of the following or related fields: Engineering, Operations Research, Management Science, Economics, Statistics, Applied Math, Computer Science, or Data Science. An advanced degree is preferred (Master's, MBA, or PhD).
- Proficient in Power BI, PowerApps (development and troubleshooting), SQL, Python, Databricks, and AWS S3 operations.
- Strong understanding of data governance, privacy standards, and operational best practices.
- Excellent communication and influencing skills with a consistent track record.
- Experience working in a business support or operational data management environment.
- Organization and time management skills.
- Define and document detailed user stories, acceptance criteria, and non-functional requirements for the data products.
- Engage with cross-functional collaborators to understand their requirements, difficulties, and expectations.
- Advocate for a user-centric design approach, ensuring that the data products are intuitive, accessible, and meet the needs of the target users.
- Collaborate with the development team to plan and implement agile sprints, ensuring timely delivery of high-quality features.
- Supervise the data product ecosystem's business architecture, design, and development.
- Monitor industry trends and standard processes in data product development and management.
- Collaborate closely with business collaborators to understand their requirements and translate them into technical solutions.
- Supervise the end-to-end development lifecycle of the data products, from conceptualisation to deployment.
- Strong leadership and communication skills with a demonstrated ability to work collaboratively with a significant number of business leaders and cross-functional business partners.
- Present succinct, compelling reviews of independently developed analyses, infused with insight and business implications/actions to be considered.
- Strategic and critical thinking with the ability to engage, build, and maintain credibility with the Commercial Leadership Team.
- Strong organizational skills and time management; ability to handle a diverse range of simultaneous projects.

Desirable Skills/Experience
- Knowledge of AZ brand and science.
- Experience working with multiple 3rd-party providers, including information technology partners.
- Understanding of US BBU commercial and medical business functions.
- Experience with Sharp (internal AZ platform) administration, Power Apps development, or troubleshooting.

When we put unexpected teams in the same room, we ignite ambitious thinking with the power to encourage life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our outstanding and ambitious world.

At AstraZeneca, you'll be part of a versatile distributed team that powers our enterprise to better serve patients every day. We demonstrate exciting new technology and digital innovations to accelerate our evolution. With an ambitious spirit that keeps us ahead of the rest, we apply creativity to every task we do. Our fast-paced environment grows with collaboration among bright minds who support each other while pushing forward. Here you'll find countless opportunities to build an outstanding reputation while being rewarded for your successes. Ready to make an impact? Apply now to join our dynamic team!
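The Python-plus-S3 reconciliation work named above often starts with pulling an extract into pandas. A small, hypothetical sketch (bucket, key, and column names are invented; assumes boto3 credentials are already configured):

```python
# Sketch: ad-hoc reconciliation of an S3 extract with boto3 and pandas.
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="bbu-data", Key="extracts/activity.csv")  # placeholders
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Simple reconciliation check: row counts by source system.
print(df.groupby("source_system").size())
```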

Posted 1 week ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Description
Ciklum is looking for a Senior JavaScript Software Engineer to join our team full-time in India.

We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About The Role
As a Senior JavaScript Software Engineer, become a part of a cross-functional development team engineering experiences of tomorrow. The client for this project is a leading global provider of audit and assurance, consulting, financial advisory, risk advisory, tax, and related services. They are launching a digital transformation project to evaluate existing technology across the tax lifecycle and determine the best future state for that technology. This will include decomposing existing assets to determine functionality, assessment of those functionalities to determine the appropriate end state, and building of new technologies to replace those functionalities.

Responsibilities
- Participate in requirements analysis
- Collaborate with US and vendors’ teams to produce software design and architecture
- Write clean, scalable code using Angular with TypeScript, HTML, CSS, and .NET
- Participate in the pull request code review process
- Test and deploy applications and systems
- Revise, update, refactor and debug code
- Develop, support and maintain applications and technology solutions
- Ensure that all development efforts meet or exceed client expectations; applications should meet requirements of scope, functionality, and time, and adhere to all defined and agreed-upon standards
- Become familiar with all development tools, testing tools, methodologies and processes
- Become familiar with the project management methodology and processes
- Encourage collaborative efforts and camaraderie with on-shore and off-shore team members
- Demonstrate a strong working understanding of the best industry standards in software development and version control
- Ensure the quality and low bug rates of code released into production
- Work on agile projects, participate in daily SCRUM calls and provide task updates
- During design and key development phases, we might need to work a staggered shift as applicable to ensure appropriate overlap with US teams and project deliveries

Requirements
We know that sometimes you can’t tick every box. We would still love to hear from you if you think you will be a good fit.
- 6+ years of strong hands-on experience with JavaScript (ES6/ES2015+), HTML5, CSS3
- 2+ years of hands-on experience with TypeScript
- 2+ years of hands-on experience with Angular 11+ component architecture, applying design patterns
- Experience with Angular 11+ and migrating to newer versions
- Experience with Angular state management libraries such as NgXs
- Experience with RxJS operators
- Hands-on experience with Kendo UI, Angular Material, or SpreadJS libraries
- Experience with Nx (the Nrwl/Nx library for monorepos)
- Skill in writing reusable components, Angular services, directives and pipes
- Hands-on experience with C#, SQL Server, OOP concepts, and microservices architecture
- At least two years of hands-on experience with .NET Core, ASP.NET Core Web API, SQL, NoSQL, Entity Framework 6 or above, Azure, database performance tuning, applying design patterns, and Agile
- .NET back-end development with data engineering expertise
- Experience with MS Fabric as a data platform, Snowflake, or similar tools would be a plus, but is not a must
- Skill in writing reusable libraries
- Comfortable with Git and Git hooks using PowerShell, Terminal or a variation thereof
- Familiarity with agile development methodologies
- Excellent communication skills, both oral and written
- Excellent troubleshooting skills and the ability to communicate clearly with US counterparts

Desirable
- Exposure to micro-frontend architecture
- Knowledge of Yarn, Webpack, MongoDB, NPM, Azure DevOps build/release configuration
- SignalR, ASP.NET Core and WebSockets
- This is an experienced-level position, and we will train the qualified candidate in the required applications
- Willingness to work extra hours to meet deliverables
- Exposure to Application Insights and Adobe Analytics
- Understanding of cloud infrastructure design and implementation
- Experience in CI/CD configuration
- Good knowledge of data analysis in enterprises
- Experience with Databricks, Snowflake
- Exposure to Docker and its configurations; experience with Kubernetes

What's in it for you
- Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation
- Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications
- Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally
- Flexibility: hybrid work mode at Chennai or Pune
- Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential
- Global impact: work on large-scale projects that redefine industries with international and fast-growing clients
- Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events

About Us
At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level.

Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn.

Explore, empower, engineer with Ciklum! Experiences of tomorrow. Engineered together.

Interested already? We would love to get to know you! Submit your application. Can’t wait to see you at Ciklum.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Description
Ciklum is looking for an Expert Angular Developer to join our team full-time in India.

We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About The Role
As an Expert Angular Developer, become a part of a cross-functional development team engineering experiences of tomorrow. The client for this project is a leading global provider of audit and assurance, consulting, financial advisory, risk advisory, tax, and related services. They are launching a digital transformation project to evaluate existing technology across the tax lifecycle and determine the best future state for that technology. This will include decomposing existing assets to determine functionality, assessment of those functionalities to determine the appropriate end state, and building of new technologies to replace those functionalities.

Responsibilities
- Participate in requirements analysis
- Collaborate with US and vendors’ teams to produce software design and architecture
- Write clean, scalable code using Angular with TypeScript, HTML, and CSS
- Participate in the pull request code review process
- Test and deploy applications and systems
- Revise, update, refactor and debug code
- Develop, support and maintain applications and technology solutions
- Ensure that all development efforts meet or exceed client expectations; applications should meet requirements of scope, functionality, and time, and adhere to all defined and agreed-upon standards
- Become familiar with all development tools, testing tools, methodologies and processes
- Become familiar with the project management methodology and processes
- Encourage collaborative efforts and camaraderie with on-shore and off-shore team members
- Demonstrate a strong working understanding of the best industry standards in software development and version control
- Ensure the quality and low bug rates of code released into production
- Work on agile projects, participate in daily SCRUM calls and provide task updates
- During design and key development phases, we might need to work a staggered shift as applicable to ensure an appropriate overlap with US teams and project deliveries

Requirements
We know that sometimes you can’t tick every box. We would still love to hear from you if you think you will be a good fit.
- 6+ years of strong hands-on experience with JavaScript (ES6/ES2015+), HTML5, CSS3
- 2+ years of hands-on experience with TypeScript
- 2+ years of hands-on experience with Angular 11+ component architecture, applying design patterns
- Experience with Angular 11+ and migrating to newer versions
- Experience with Angular state management libraries such as NgXs
- Experience with RxJS operators
- Hands-on experience with Kendo UI, Angular Material, or SpreadJS libraries
- Experience with Nx (the Nrwl/Nx library for monorepos)
- Skill in writing reusable components, Angular services, directives and pipes
- Comfortable with Git and Git hooks using PowerShell, Terminal or a variation thereof
- Familiarity with agile development methodologies
- Excellent communication skills, both oral and written
- Excellent troubleshooting skills and the ability to communicate clearly with US counterparts

Desirable
- Exposure to micro-frontend architecture
- This is an experienced-level position, and we will train the qualified candidate in the required applications
- Willingness to work extra hours to meet deliverables
- Exposure to .NET Core, ASP.NET Core Web API, SQL, NoSQL, Entity Framework 6 or above, Azure, database performance tuning, applying design patterns, Agile, and microservices architecture
- Knowledge of Yarn, Webpack, MongoDB, NPM, Azure DevOps build/release configuration
- SignalR, ASP.NET Core and WebSockets
- Exposure to Application Insights and Adobe Analytics
- Understanding of cloud infrastructure design and implementation
- Experience in CI/CD configuration
- Good knowledge of data analysis in enterprises
- Experience with Databricks, Snowflake
- Exposure to Docker and its configurations; experience with Kubernetes

What's in it for you
- Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation
- Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications
- Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally
- Flexibility: hybrid work mode at Chennai or Pune
- Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential
- Global impact: work on large-scale projects that redefine industries with international and fast-growing clients
- Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events

About Us
At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level.

Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn.

Explore, empower, engineer with Ciklum! Experiences of tomorrow. Engineered together.

Interested already? We would love to get to know you! Submit your application. Can’t wait to see you at Ciklum.

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

Remote


AI / Generative AI Engineer
Location: Remote (Pan India)
Job Type: Full-time
NOTE: Only immediate joiners or candidates with a notice period of 15 days or less will be considered.

Overview:
We are seeking a highly skilled and motivated AI/Generative AI Engineer to join our innovative team. The ideal candidate will have a strong background in designing, developing, and deploying artificial intelligence and machine learning models, with a specific focus on cutting-edge Generative AI technologies. This role requires hands-on experience with one or more major cloud platforms (Google Cloud Platform - GCP, Amazon Web Services - AWS) and/or modern data platforms (Databricks, Snowflake). You will be instrumental in building and scaling AI solutions that drive business value and transform user experiences.

Key Responsibilities:

Design and Development:
- Design, build, train, and deploy scalable and robust AI/ML models, including traditional machine learning algorithms and advanced Generative AI models (e.g., Large Language Models (LLMs), diffusion models).
- Develop and implement algorithms for tasks such as natural language processing (NLP), text generation, image synthesis, speech recognition, and forecasting.
- Work extensively with LLMs, including fine-tuning, prompt engineering, retrieval-augmented generation (RAG), and evaluating their performance.
- Develop and manage data pipelines for data ingestion, preprocessing, feature engineering, and model training, ensuring data quality and integrity.

Platform Expertise:
- Leverage cloud AI/ML services on GCP (e.g., Vertex AI, AutoML, BigQuery ML, Model Garden, Gemini), AWS (e.g., SageMaker, Bedrock, S3), Databricks, and/or Snowflake to build and deploy solutions.
- Architect and implement AI solutions ensuring scalability, reliability, security, and cost-effectiveness on the chosen platform(s).
- Optimize data storage, processing, and model serving components within the cloud or data platform ecosystem.

MLOps and Productionization:
- Implement MLOps best practices for model versioning, continuous integration/continuous deployment (CI/CD), monitoring, and lifecycle management.
- Deploy models into production environments and ensure their performance, scalability, and reliability.
- Monitor and optimize the performance of AI models in production, addressing issues related to accuracy, speed, and resource utilization.

Collaboration and Innovation:
- Collaborate closely with data scientists, software engineers, product managers, and business stakeholders to understand requirements, define solutions, and integrate AI capabilities into applications and workflows.
- Stay current with the latest advancements in AI, Generative AI, machine learning, and relevant cloud/data platform technologies.
- Lead and participate in the ideation and prototyping of new AI applications and systems.
- Ensure AI solutions adhere to ethical standards, responsible AI principles, and regulatory requirements, addressing issues like data privacy, bias, and fairness.

Documentation and Communication:
- Create and maintain comprehensive technical documentation for AI models, systems, and processes.
- Effectively communicate complex AI concepts and results to both technical and non-technical audiences.

Required Qualifications:
- 8+ years of experience with software development in one or more programming languages, and with data structures, algorithms, and data architecture.
- 3+ years of experience with state-of-the-art GenAI techniques (e.g., LLMs, multi-modal models, large vision models) or with GenAI-related concepts (language modeling, computer vision).
- 3+ years of experience with ML infrastructure (e.g., model deployment, model evaluation, optimization, data processing, debugging).
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Data Science, or a related technical field.
- Proven experience as an AI Engineer, Machine Learning Engineer, or a similar role.
- Strong programming skills in Python. Familiarity with other languages like Java, Scala, or R is a plus.
- Solid understanding of machine learning algorithms (supervised, unsupervised, reinforcement learning), deep learning concepts (e.g., CNNs, RNNs, Transformers), and statistical modeling.
- Hands-on experience developing and deploying Generative AI models and techniques, including working with Large Language Models (LLMs like GPT, BERT, LLaMA, etc.).
- Proficiency with common AI/ML frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, Keras, Hugging Face Transformers, LangChain, etc.
- Demonstrable experience with at least one of the following cloud/data platforms:
  - GCP: Experience with Vertex AI, BigQuery ML, Google Cloud Storage, and other GCP AI/ML services.
  - AWS: Experience with SageMaker, Bedrock, S3, and other AWS AI/ML services.
  - Databricks: Experience building and scaling AI/ML solutions on the Databricks Lakehouse Platform, including MLflow.
  - Snowflake: Experience leveraging Snowflake for data warehousing, data engineering for AI/ML workloads, and Snowpark.
- Experience with data engineering, including data acquisition, cleaning, transformation, and building ETL/ELT pipelines.
- Knowledge of MLOps tools and practices for model deployment, monitoring, and management.
- Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
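To ground the retrieval-augmented generation (RAG) requirement above, here is a deliberately framework-free conceptual sketch: embed documents, retrieve the nearest one for a query, and prepend it to the prompt. The `embed` function is a stand-in for a real embedding model call, not any specific library's API, and the documents are invented.

```python
# Conceptual RAG sketch: retrieve context, then augment the LLM prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model (e.g., a provider's embeddings API).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

docs = ["Databricks runs Spark workloads.", "Snowflake is a cloud warehouse."]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(-sims)[:k]]  # top-k by cosine similarity

question = "What is Snowflake?"
context = retrieve(question)
prompt = f"Answer using this context: {context}\n\nQuestion: {question}"
# `prompt` would then be sent to an LLM; generation is out of scope here.
print(prompt)
```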

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

Remote


***Immediate requirement***

Job Title: Databricks Expert (PySpark and Snowflake mandatory)
Years of experience: 6+ years
Job Type: Contract
Contract Duration: 6-12 months (potential to extend or convert to permanent)
Location: India
Work Type: Remote
Salary Range: 15-17 LPA
Start Date: Immediate (notice period/joining within 1-2 weeks)

**Apply only if you can join within 1-2 weeks**

Job summary:
We’re looking for a Databricks expert with strong hands-on experience in PySpark and Snowflake to design and optimize large-scale data pipelines. The ideal candidate should be proficient in developing ETL workflows within Databricks and integrating with Snowflake for data warehousing solutions. Experience with AWS services (like S3, Lambda, or Glue) is a plus. This role requires strong problem-solving skills and a deep understanding of data engineering best practices. Apply now!
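The Databricks-to-Snowflake integration named above is commonly done with the Spark-Snowflake connector. A hedged sketch (all connection options are placeholders; assumes the connector is installed on the cluster, where the short `"snowflake"` format name is the Databricks alias for `net.snowflake.spark.snowflake`):

```python
# Sketch: writing a Databricks Delta table into Snowflake.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",  # placeholder account
    "sfUser": "etl_user",
    "sfPassword": "***",           # use a secret scope in real code
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

df = spark.read.format("delta").load("/mnt/silver/orders")  # placeholder path

(df.write.format("snowflake")   # Databricks alias for the Snowflake connector
   .options(**sf_options)
   .option("dbtable", "ORDERS")
   .mode("overwrite")
   .save())
```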

Posted 1 week ago

Apply

5.0 - 9.0 years

12 - 20 Lacs

Bengaluru

Work from Office


Experience: 5-8 years
Location: Bangalore
Mode: C2H

- Hands-on data engineering experience
- Hands-on experience with Python programming
- Hands-on experience with AWS and EKS
- Working knowledge of Unix, databases, and SQL
- Working knowledge of Databricks
- Working knowledge of Airflow and dbt

Posted 1 week ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY-Strategy and Transactions - SaT – DnA Associate Manager EY’s Data n’ Analytics team is a multi-disciplinary technology team delivering client projects and solutions across Data Management, Visualization, Business Analytics and Automation. The assignments cover a wide range of countries and industry sectors. The opportunity We’re looking for an Associate Manager - Data Engineering. The main objective of the role is to support cloud and on-prem platform analytics and data engineering projects initiated across engagement teams. The role will primarily involve conceptualizing, designing, developing, deploying and maintaining complex technology solutions which help EY solve business problems for the clients. This role will work closely with technical architects, product and business subject matter experts (SMEs), back-end developers and other solution architects and is also on-shore facing. This role will be instrumental in designing, developing, and evolving the modern data warehousing solutions and data integration build-outs using cutting-edge tools and platforms for both on-prem and cloud architectures. In this role you will be coming up with design specifications, documentation, and development of data migration mappings and transformations for a modern Data Warehouse setup/data mart creation, and defining robust ETL processing to collect and scrub both structured and unstructured data, providing self-serve capabilities (OLAP) in order to create impactful decision analytics reporting. Your Key Responsibilities Evaluating and selecting data warehousing tools for business intelligence, data population, data management, metadata management and warehouse administration for both on-prem and cloud-based engagements Strong working knowledge across the technology stack including ETL, ELT, data analysis, metadata, data quality, audit and design Design, develop, and test in ETL tool environment (GUI/canvas-driven tools to create workflows) Experience in design documentation (data mapping, technical specifications, production support, data dictionaries, test cases, etc.)
Provide technical leadership to a team of data warehouse and business intelligence developers Coordinate with other technology users to design and implement matters of data governance, data harvesting, cloud implementation strategy, privacy, and security Adhere to ETL/Data Warehouse development Best Practices Responsible for Data orchestration, ingestion, ETL and reporting architecture for both on-prem and cloud (MS Azure/AWS/GCP) Assisting the team with performance tuning for ETL and database processes Skills And Attributes For Success Minimum of 7 years of total experience with 3+ years in the Data Warehousing/Business Intelligence field Solid hands-on 3+ years of professional experience with creation and implementation of data warehouses on client engagements and helping create enhancements to a data warehouse Strong knowledge of data architecture for staging and reporting schemas, data models and cutover strategies using industry-standard tools and technologies Architecture design and implementation experience with medium to complex on-prem to cloud migrations with any of the major cloud platforms (preferably AWS/Azure/GCP) Minimum 3+ years' experience in Azure database offerings [Relational, NoSQL, Data Warehouse] 2+ years' hands-on experience in various Azure services preferred – Azure Data Factory, Kafka, Azure Data Explorer, Storage, Azure Data Lake, Azure Synapse Analytics, Azure Analysis Services & Databricks Minimum of 3 years of hands-on database design, modeling and integration experience with relational data sources, such as SQL Server databases, Oracle/MySQL, Azure SQL and Azure Synapse Strong in PySpark, SparkSQL Knowledge and direct experience using business intelligence reporting tools (Power BI, Alteryx, OBIEE, Business Objects, Cognos, Tableau, MicroStrategy, SSAS Cubes, etc.) Strong creative instincts related to data analysis and visualization. Aggressive curiosity to learn the business methodology, data model and user personas. Strong understanding of BI and DWH best practices, analysis, visualization, and latest trends. Experience with the software development lifecycle (SDLC) and principles of product development such as installation, upgrade and namespace management Willingness to mentor team members Solid analytical, technical and problem-solving skills Excellent written and verbal communication skills To qualify for the role, you must have Bachelor’s or equivalent degree in computer science, or related field, required. Advanced degree or equivalent business experience preferred Fact-driven and analytically minded with excellent attention to detail Hands-on experience with data engineering tasks such as building analytical data records and experience manipulating and analyzing large volumes of data Relevant work experience of minimum 6 to 8 years in a big 4 or technology/consulting setup Ideally, you’ll also have Ability to think strategically/end-to-end with result-oriented mindset Ability to build rapport within the firm and win the trust of the clients Willingness to travel extensively and to work on client sites/practice office locations Experience in Snowflake What We Look For A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment An opportunity to be a part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
Opportunities to work with EY SaT practices globally with leading businesses across a range of industries What Working At EY Offers At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less
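To illustrate the PySpark/SparkSQL ETL work the posting emphasizes, here is a small hedged sketch of a staging-load step: ingest a raw extract, standardize and deduplicate it, and expose it to SQL consumers. The path and column names are invented for the example, not drawn from any EY engagement.

```python
# Hedged sketch of an ETL "scrub" step feeding a staging schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dw_staging_load").getOrCreate()

raw = spark.read.option("header", True).csv("/landing/customers.csv")  # placeholder path

clean = (
    raw.withColumn("customer_id", F.col("customer_id").cast("bigint"))  # enforce types
       .withColumn("email", F.lower(F.trim("email")))                   # standardize
       .filter(F.col("customer_id").isNotNull())                        # drop bad keys
       .dropDuplicates(["customer_id"])                                 # deduplicate
)

# SparkSQL view so downstream BI/reporting layers can query the staged data.
clean.createOrReplaceTempView("stg_customers")
spark.sql("SELECT COUNT(*) AS loaded_rows FROM stg_customers").show()
```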

Posted 1 week ago

Apply

9.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY-Strategy and Transactions - SaT – DnA Manager EY’s Data n’ Analytics team is a multi-disciplinary technology team delivering client projects and solutions across Data Management, Visualization, Business Analytics and Automation. The assignments cover a wide range of countries and industry sectors. The opportunity We’re looking for a Manager - Data Engineering. The main objective of the role is to support cloud and on-prem platform analytics and data engineering projects initiated across engagement teams. The role will primarily involve conceptualizing, designing, developing, deploying and maintaining complex technology solutions which help EY solve business problems for the clients. This role will work closely with technical architects, product and business subject matter experts (SMEs), back-end developers and other solution architects and is also on-shore facing. This role will be instrumental in designing, developing, and evolving the modern data warehousing solutions and data integration build-outs using cutting-edge tools and platforms for both on-prem and cloud architectures. In this role you will be coming up with design specifications, documentation, and development of data migration mappings and transformations for a modern Data Warehouse setup/data mart creation, and defining robust ETL processing to collect and scrub both structured and unstructured data, providing self-serve capabilities (OLAP) in order to create impactful decision analytics reporting. Your Key Responsibilities Evaluating and selecting data warehousing tools for business intelligence, data population, data management, metadata management and warehouse administration for both on-prem and cloud-based engagements Strong working knowledge across the technology stack including ETL, ELT, data analysis, metadata, data quality, audit and design Design, develop, and test in ETL tool environment (GUI/canvas-driven tools to create workflows) Experience in design documentation (data mapping, technical specifications, production support, data dictionaries, test cases, etc.)
Provide technical leadership to a team of data warehouse and business intelligence developers Coordinate with other technology users to design and implement matters of data governance, data harvesting, cloud implementation strategy, privacy, and security Adhere to ETL/Data Warehouse development Best Practices Responsible for Data orchestration, ingestion, ETL and reporting architecture for both on-prem and cloud (MS Azure/AWS/GCP) Assisting the team with performance tuning for ETL and database processes Skills And Attributes For Success 9-11 years of total experience with 5+ years in the Data Warehousing/Business Intelligence field Solid hands-on 5+ years of professional experience with creation and implementation of data warehouses on client engagements and helping create enhancements to a data warehouse Strong knowledge of data architecture for staging and reporting schemas, data models and cutover strategies using industry-standard tools and technologies Architecture design and implementation experience with medium to complex on-prem to cloud migrations with any of the major cloud platforms (preferably AWS/Azure/GCP) Minimum 3+ years’ experience in Azure database offerings [Relational, NoSQL, Data Warehouse] 3+ years' hands-on experience in various Azure services preferred – Azure Data Factory, Kafka, Azure Data Explorer, Storage, Azure Data Lake, Azure Synapse Analytics, Azure Analysis Services & Databricks Minimum of 5 years of hands-on database design, modeling and integration experience with relational data sources, such as SQL Server databases, Oracle/MySQL, Azure SQL and Azure Synapse Strong in PySpark, SparkSQL Knowledge and direct experience using business intelligence reporting tools (Power BI, Alteryx, OBIEE, Business Objects, Cognos, Tableau, MicroStrategy, SSAS Cubes, etc.) Strong creative instincts related to data analysis and visualization. Aggressive curiosity to learn the business methodology, data model and user personas. Strong understanding of BI and DWH best practices, analysis, visualization, and latest trends. Experience with the software development lifecycle (SDLC) and principles of product development such as installation, upgrade and namespace management Willingness to mentor team members Solid analytical, technical and problem-solving skills Excellent written and verbal communication skills To qualify for the role, you must have Bachelor’s or equivalent degree in computer science, or related field, required. Advanced degree or equivalent business experience preferred Fact-driven and analytically minded with excellent attention to detail Hands-on experience with data engineering tasks such as building analytical data records and experience manipulating and analysing large volumes of data Relevant work experience of minimum 9 to 11 years in a big 4 or technology/consulting setup Ideally, you’ll also have Ability to think strategically/end-to-end with result-oriented mindset Ability to build rapport within the firm and win the trust of the clients Willingness to travel extensively and to work on client sites/practice office locations Experience with Snowflake What We Look For A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment An opportunity to be a part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
Opportunities to work with EY SaT practices globally with leading businesses across a range of industries What Working At EY Offers At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Summary: We are seeking a skilled Data Engineer with PowerBI and Data Modeling expertise to maintain and enhance existing PowerBI dashboards, optimize the semantic data model, and support the migration of data models to a more abstract transformation layer. This role requires a strong understanding of PowerBI architecture, data modeling best practices, and performance optimization techniques to ensure reliable, scalable, and accurate data delivery to stakeholders. Key Responsibilities: Maintain and enhance existing PowerBI dashboards ensuring business continuity and data accuracy. Support the migration of current data models to an abstracted transformation layer (e.g., Azure Data Factory, SQL, Synapse, etc.). Optimize the current PowerBI semantic model for performance, scalability, and maintainability. Ensure efficient and secure connections between PowerBI and various data sources (SQL Server, Azure, etc.). Collaborate with business stakeholders, analysts, and developers to gather requirements and translate them into effective data models and visualizations. Identify and resolve data quality and performance issues. Implement best practices for PowerBI dataset management, including incremental refresh, dataflows, and reusable components. Document data models, transformations, and dashboard logic for internal use and knowledge sharing. Required Qualifications: 5+ years of experience in data engineering and BI development, with a strong focus on PowerBI. Expertise in PowerBI Desktop, Service, and the PowerBI ecosystem (Dataflows, DAX, Gateways, etc.). Deep understanding of semantic modeling concepts including star/snowflake schema, measures, calculated columns, relationships, etc. Experience with optimizing PowerBI reports for performance (query folding, aggregation tables, performance analyzer). Strong SQL skills and experience with ETL/ELT tools (e.g., Azure Data Factory, SSIS, or similar). Familiarity with cloud-based data platforms (Azure Synapse, Data Lake, Databricks, etc.). Excellent problem-solving and communication skills Show more Show less

Posted 1 week ago

Apply

125.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

JOB DESCRIPTION ————————————————————————————————————— Manager / DGM - Data Platform Godrej Consumer Products Limited (GCPL) Mumbai, Maharashtra, India ————————————————————————————————————— Job Title: Manager / DGM - Data Platform Job Type: Permanent, Full-time Function: Information Technology Business: Godrej Consumer Products Limited Location: Mumbai, Maharashtra, India About Godrej Industries Limited and Associate Companies (GILAC) GILAC is a holding company of the Godrej Group. We have significant interests in consumer goods, real estate, agriculture, chemicals, and financial services through our subsidiary and associate companies, across 18 countries. https://www.godrejindustries.com/ About Godrej Consumer Products Limited (GCPL) Godrej Consumer Products is a leading emerging markets company. As part of the over 125-year young Godrej Group, we are fortunate to have a proud legacy built on the strong values of trust, integrity and respect for others. At the same time, we are growing fast and have exciting, ambitious aspirations. https://www.godrejcp.com/ About the role This role holder will act as a Data Engineering Project Lead. The role holder is responsible for implementation and support for data engineering projects (primarily on the Microsoft Azure platform) through our partner ecosystem for our businesses globally. The responsibility also includes evaluating and implementing new features and products of the Azure data platform (e.g., Gen AI), and driving standardization of the Azure technology stack and of data engineering and coding best practices for Azure projects. Key Responsibilities Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services. Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse. Developing data models and maintaining data architecture to support data analytics and business intelligence reporting. Ensuring data quality and consistency through data cleaning, transformation, and integration processes. Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance. Collaborating with data scientists, business analysts, and other stakeholders to understand data requirements and implement appropriate data solutions. Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information. Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks. Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making. Keeping abreast of the latest Azure features and technologies to enhance data engineering processes and capabilities. Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards. Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging. Who are we looking for?
Education: BE / B-Tech in Computer Science from a premier institute; MBA is preferred; Azure Cloud Data Engineering Certifications Experience: 10 years of overall experience and at least 5 years of experience in Azure Data Engineering Skills: Azure Data Factory and Data Pipeline Orchestration Azure Databricks and Big Data Processing Azure Synapse Analytics and Data Warehousing Data Modeling and Database Design SQL and NoSQL Database Technologies Data Lake Storage and Management Power BI and Data Visualization [Optional] Machine Learning and AI Integration with Azure ML Python, PySpark, PySQL [Spark] Programming Data Security and Compliance within Azure What’s in it for you? Be an equal parent Maternity support, including paid leave ahead of statutory guidelines, and flexible work options on return Paternity support, including paid leave New mothers can bring a caregiver and children under a year old, on work travel Adoption support; gender neutral and based on the primary caregiver, with paid leave options No place for discrimination at Godrej Gender-neutral anti-harassment policy Same sex partner benefits at par with married spouses Gender transition support We are selfish about your wellness Comprehensive health insurance plans, as well as accident coverage for you and your family, with top-up options Uncapped sick leave Mental wellness and self-care programmes, resources and counselling Celebrating wins, the Godrej Way Structured recognition platforms for individual, team and business-level achievements Performance-based earning opportunities https://www.godrejcareers.com/benefits/ An inclusive Godrej Before you go, there is something important we want to highlight. There is no place for discrimination at Godrej. Diversity is the philosophy of who we are as a company. And has been for over a century. It’s not just in our DNA and nice to do. Being more diverse - especially having our team members reflect the diversity of our businesses and communities - helps us innovate better and grow faster. We hope this resonates with you. We take pride in being an equal opportunities employer. We recognise merit and encourage diversity. We do not tolerate any form of discrimination on the basis of nationality, race, colour, religion, caste, gender identity or expression, sexual orientation, disability, age, or marital status and ensure equal opportunities for all our team members. If this sounds like a role for you, apply now! We look forward to meeting you. Show more Show less
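For flavor, the Databricks leg of such an Azure pipeline often looks like the hedged sketch below: read raw data from ADLS, apply a simple quality rule, and append to a partitioned Delta table that Synapse or Power BI can consume. The storage paths, rule, and column names are placeholders, not GCPL's actual configuration.

```python
# Hedged sketch of one Databricks step in an Azure data pipeline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw_path = "abfss://raw@<storageaccount>.dfs.core.windows.net/sales/"          # placeholder
curated_path = "abfss://curated@<storageaccount>.dfs.core.windows.net/sales/"  # placeholder

sales = spark.read.format("json").load(raw_path)

curated = (
    sales.filter(F.col("amount") > 0)                 # simple data-quality rule
         .withColumn("ingest_date", F.current_date()) # audit/lineage column
)

# Partitioned Delta write keeps downstream queries and refreshes fast.
(curated.write.format("delta")
        .mode("append")
        .partitionBy("ingest_date")
        .save(curated_path))
```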

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bangalore Urban, Karnataka, India

Remote

Linkedin logo

About Airties At Airties we are on a mission to empower broadband operators to deliver a better-connected home experience for their subscribers. We have an exciting story to tell, and we want you to help us tell it! Airties offers broadband operators advanced Wi-Fi solutions and services to allow them to bring an improved user experience for their subscribers. The Airties portfolio includes Smart Wi-Fi software, a cloud-based experience management platform with its companion app and data engine. Our company also offers expert, bespoke engineering and testing services. Globally, Airties is the most widely deployed provider of Smart Wi-Fi solutions to network service providers and our technologies are driving a better-connected user experience in more than 35 million homes. Introduction Airties is looking for a Field Application Engineer (FAE) in India. An FAE is the primary technical point of contact to Airties customers across APAC and Australia, accommodating different time zones. This is a multi-faceted role that supports product pre-sales, acceptance, launch and post-deployment phases, requiring communication internally and externally at all levels, providing fast and high-quality responses to customers and being the customer advocate to internal teams. What you will do: Provide onsite installations, product trials and deployments, and other professional services to Airties customers. Work with customers in the field cooperating with Airties Sales, Project, Product and Engineering teams Support sales efforts by explaining current products and solutions, sending samples, conducting trials and proof-of-concepts Escalate product customization and localization needs of the customer to Product, Engineering, and Technical Support management After product release, take responsibility for recording, tracking and handling defects and technical feedback from the customer. Conduct first level root-cause analysis, issue replication and answer technical questions real time on site and escalate appropriately Provide timely and effective resolution to support requests based on internal and external service level agreements (SLA) Work closely with Engineering teams to investigate, assign, and resolve defects. Deploy software defect fixes at customer sites. Document each customer issue/request using the Airties ticket management system Provide ongoing, regular updates to customers to keep them apprised of progress toward problem resolution Respond to requests for technical information and assistance in a timely and professional manner Provide regular reports on field services and/or tests performed. Travel to customer sites to rectify problems when/if necessary Work with the customer's staff to train and develop operations capability on Airties' products Support alpha and beta tests of new Airties products at customer sites Provide feedback to Sales, Product, and Engineering teams to improve Airties products Promote Airties products at customers and establish strong lasting customer relationships What you should ideally bring: Bachelor’s or higher degree in Network Engineering, EE or similar technical field is required 5+ years of professional experience, including 2+ years of hands-on field experience in networking in a customer-facing role delivering professional services, is required Excellent communication, presentation and reporting skills in English are mandatory.
This job requires extensive oral communication skills to deal with customers' teams, and written communication skills to produce reports and technical notes to customers. Demonstration of oral and written English proficiency will be required during the application process. Strong understanding of network protocols/standards: TCP, UDP, IP, Ethernet, Wi-Fi protocols, and IEEE 802.11 standards is mandatory. Knowledge of network tools like Wireshark, tcpdump, iperf, etc. Shell/Python scripting knowledge is a plus. General understanding of AWS and similar cloud technologies along with Tableau, Databricks and Grafana. Linux and very comfortable with CLI in various environments. Expert in remote access tools and applications: Telnet / SSH / SCP / TFTP / Serial Console. Experience with broadband, IPTV, and streaming video technologies is a big plus. Experience with Customer Premise Equipment devices, residential gateways, set top boxes is required. Familiarity with CPE management software solutions is a plus. Ability to travel within short notice is required. This position requires international travel up to 50% of work time. Airties has a zero-tolerance discrimination policy. In this regard, during the course of the evaluation of your job application and during all your employment relation, if any, all discriminatory factors such as race, sex, sexual orientation, social gender definitions/roles, color, national or social background, ethnicity, religion, age, disablement, political opinion or any status that is protected under law shall be totally disregarded. *By applying to this job opening, you agree, acknowledge and consent to the transfer of your personal data by Airties to outside of Turkey; in particular to its subsidiaries. *By applying to this job opening, you agree, acknowledge and consent to the transfer of your personal data by Airties to its headquarters established in Turkey. Show more Show less
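Since the role lists Shell/Python scripting alongside tools like Wireshark and iperf, a small example of the quick field diagnostics that scripting enables may be useful. This is a generic sketch in pure Python; the host address is a placeholder, and it complements rather than replaces dedicated tools like iperf or tcpdump.

```python
# Hedged sketch: a TCP reachability/latency probe for quick field checks.
import socket
import time

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> float | None:
    """Return TCP connect latency in milliseconds, or None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None

latency = tcp_probe("192.0.2.10", 80)  # placeholder CPE/gateway address
print(f"reachable, {latency:.1f} ms" if latency is not None else "unreachable")
```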

Posted 1 week ago

Apply

7.0 - 12.0 years

18 - 30 Lacs

Bengaluru

Work from Office

Naukri logo

Urgently Hiring for Senior Azure Data Engineer Job Location - Bangalore Minimum experience - Total 7+ years with minimum 4 years relevant experience Keywords: Databricks, PySpark, Scala, SQL, live/streaming data, batch-processing data Share CV siddhi.pandey@adecco.com OR Call 6366783349 Roles and Responsibilities: The Data Engineer will work on data engineering projects for various business units, focusing on delivery of complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available for consuming applications and analytical solutions. A data engineer is expected to possess strong technical skills. Key Characteristics: Technology champion who constantly pursues skill enhancement and has inherent curiosity to understand work from multiple dimensions Interest and passion in Big Data technologies and appreciates the value that can be brought in with an effective data management solution Has worked on real data challenges and handled high volume, velocity, and variety of data. Excellent analytical & problem-solving skills, willingness to take ownership and resolve technical challenges. Contributes to community building initiatives like CoE, CoP. Mandatory skills: Azure - Master; ELT - Skill; Data Modeling - Skill; Data Integration & Ingestion - Skill; Data Manipulation and Processing - Skill; GitHub, GitHub Actions, Azure DevOps - Skill; Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill. Optional skills: Experience in project management, running a scrum team. Experience working with BPC, Planning. Exposure to working with external technical ecosystem. MkDocs documentation. Share CV siddhi.pandey@adecco.com OR Call 6366783349
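For the live/streaming skill listed above, a hedged Structured Streaming sketch is shown below: read from Kafka, aggregate in event-time windows, and write to a Delta sink with checkpointing. Broker address, topic name, and paths are placeholders.

```python
# Hedged sketch: windowed streaming aggregation from Kafka into Delta.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "<broker>:9092")  # placeholder broker
         .option("subscribe", "orders")                        # placeholder topic
         .load()
         .select(
             F.col("timestamp").alias("event_ts"),
             F.col("value").cast("string").alias("payload"),
         )
)

# Watermark bounds state; late events beyond 10 minutes are dropped.
per_minute = (
    events.withWatermark("event_ts", "10 minutes")
          .groupBy(F.window("event_ts", "1 minute"))
          .count()
)

(per_minute.writeStream
           .format("delta")
           .outputMode("append")
           .option("checkpointLocation", "/chk/orders_per_minute")  # enables recovery
           .start("/tables/orders_per_minute"))
```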

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Overview: The Data Science team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners and final business users. This will provide you the right visibility and understanding of the criticality of your developments. Responsibilities: Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope Active contributor to code & development in projects and services Partner with data engineers to ensure data access for discovery and proper data is prepared for model consumption. Partner with ML engineers working on industrialization. Communicate with business stakeholders in the process of service design, training and knowledge transfer. Support large-scale experimentation and build data-driven models. Refine requirements into modelling problems. Influence product teams through data-based recommendations. Research in state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create reusable packages or libraries. Ensure on time and on budget delivery which satisfies project requirements, while adhering to enterprise architecture standards Leverage big data technologies to help process data and build scaled data pipelines (batch to real time) Implement end-to-end ML lifecycle with Azure Databricks and Azure Pipelines Automate ML model deployments Qualifications: BE/B.Tech in Computer Science, Maths, or related technical fields. Overall 2-4 years of experience working as a Data Scientist. 2+ years’ experience building solutions in the commercial or supply chain space. 2+ years working in a team to deliver production-level analytic solutions. Fluent in git (version control). Understanding of Jenkins and Docker is a plus. Fluent in SQL syntax. 2+ years’ experience in Statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems. 2+ years’ experience in developing statistical/ML models for business problems with industry tools, with a primary focus on Python or PySpark development. Data Science - Hands-on experience and strong knowledge of building machine learning models - supervised and unsupervised models. Knowledge of Time series/Demand Forecast models is a plus Programming Skills - Hands-on experience in statistical programming languages like Python, PySpark and database query languages like SQL Statistics - Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators Cloud (Azure) - Experience in Databricks and ADF is desirable Familiarity with Spark, Hive, Pig is an added advantage Business storytelling and communicating data insights in a business-consumable format. Fluent in one visualization tool. Strong communications and organizational skills with the ability to deal with ambiguity while juggling multiple priorities Experience with Agile methodology for teamwork and analytics ‘product’ creation.
Experience in Reinforcement Learning is a plus. Experience in Simulation and Optimization problems in any space is a plus. Experience with Bayesian methods is a plus. Experience with Causal inference is a plus. Experience with NLP is a plus. Experience with Responsible AI is a plus. Experience with distributed machine learning is a plus. Experience in DevOps and hands-on experience with one or more cloud service providers - AWS, GCP, Azure (preferred). Model deployment experience is a plus. Experience with version control systems like GitHub and CI/CD tools. Experience in exploratory data analysis. Knowledge of MLOps/DevOps and deploying ML models is preferred. Experience using MLflow, Kubeflow, etc. is preferred. Experience executing and contributing to MLOps automation infrastructure is good to have. Exceptional analytical and problem-solving skills. Stakeholder engagement - BU, vendors. Experience building statistical models in the retail or supply chain space is a plus. Show more Show less
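As one concrete slice of the "end-to-end ML lifecycle with Azure Databricks" responsibility, here is a minimal MLflow tracking sketch: train a simple scikit-learn model and log parameters, metrics, and the model artifact so a CI/CD stage can pick it up. The data, run name, and hyperparameter are illustrative; on Databricks the tracking server is provided by the workspace.

```python
# Hedged sketch of MLflow experiment tracking around a baseline model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline_logreg"):
    C = 0.5
    model = LogisticRegression(C=C, max_iter=500).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))

    mlflow.log_param("C", C)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # artifact a deployment stage can register
```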

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! Adobe is seeking dedicated Product Analytics Experts to join our growing team in Noida. In this role, you will play a key part in driving the success of Adobe's Document Cloud products by using your expertise to understand user behavior, identify growth opportunities, and help drive data-driven decisions. Responsibilities: Analyze large datasets to identify trends, patterns, and key performance indicators. Develop and maintain SQL queries to extract, transform, and load data from various sources, including Hadoop and cloud-based platforms like Databricks. Develop compelling data visualizations using Power BI and Tableau to communicate insights seamlessly to PMs/engineering and leadership. Conduct A/B testing and campaign analysis, using statistical methods to measure and evaluate the impact of product changes. Partner with cross-functional teams (product managers, engineers, marketers) to translate data into actionable insights and drive strategic decision-making. Independently own and manage projects from inception to completion, ensuring timely delivery and high-quality results. Effectively communicate analytical findings to stakeholders at all levels, both verbally and in writing. Qualifications: 8-12 years of relevant experience in solving deep analytical challenges within a product or data-driven environment. Strong proficiency in advanced SQL, with experience working with large-scale datasets. Expertise in data visualization tools such as Power BI and Tableau. Hands-on experience in A/B testing, campaign analysis, and statistical methodologies. Working knowledge of scripting languages like Python or R, with a foundational understanding of machine learning concepts. Experience with Adobe Analytics is a significant plus. Good communication, presentation, and interpersonal skills. A collaborative mindset with the ability to work effectively within cross-functional teams. Strong analytical and problem-solving skills with a passion for data-driven decision making. Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015. Show more Show less
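To ground the A/B testing responsibility, here is a minimal sketch of a two-proportion z-test on conversion counts, using statsmodels. The counts are invented for illustration; real analyses would also account for power, sequential looks, and multiple comparisons.

```python
# Hedged sketch: significance test for an A/B conversion experiment.
from statsmodels.stats.proportion import proportions_ztest

conversions = [530, 610]    # control, variant (made-up numbers)
visitors = [10_000, 10_000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A p-value below the pre-registered threshold (commonly 0.05) would suggest
# the variant's conversion rate genuinely differs from control.
```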

Posted 1 week ago

Apply

Exploring Databricks Jobs in India

Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Databricks professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Career Path

In the field of Databricks, a typical career path may include:

  1. Junior Developer
  2. Senior Developer
  3. Tech Lead
  4. Architect

Related Skills

In addition to Databricks expertise, other skills that are often expected or helpful alongside Databricks include:

  • Apache Spark
  • Python/Scala programming
  • Data modeling
  • SQL
  • Data visualization tools

Interview Questions

  • What is Databricks and how is it different from Apache Spark? (basic)
  • Explain the concept of lazy evaluation in Databricks. (medium) (see the sketch after this list)
  • How do you optimize performance in Databricks? (advanced)
  • What are the different cluster modes in Databricks? (basic)
  • How do you handle data skewness in Databricks? (medium)
  • Explain how you can schedule jobs in Databricks. (medium)
  • What is the significance of Delta Lake in Databricks? (advanced)
  • How do you handle schema evolution in Databricks? (medium) (see the sketch after this list)
  • What are the different file formats supported by Databricks for reading and writing data? (basic)
  • Explain the concept of checkpointing in Databricks. (medium)
  • How do you troubleshoot performance issues in Databricks? (advanced)
  • What are the key components of Databricks Runtime? (basic)
  • How can you secure your data in Databricks? (medium)
  • Explain the role of MLflow in Databricks. (advanced)
  • How do you handle streaming data in Databricks? (medium)
  • What is the difference between Databricks Community Edition and Databricks Workspace? (basic)
  • How do you set up monitoring and alerting in Databricks? (medium)
  • Explain the concept of Delta caching in Databricks. (advanced)
  • How do you handle schema enforcement in Databricks? (medium)
  • What are the common challenges faced in Databricks projects and how do you overcome them? (advanced)
  • How do you perform ETL operations in Databricks? (medium)
  • Explain the concept of MLflow Tracking in Databricks. (advanced)
  • How do you handle data lineage in Databricks? (medium)
  • What are the best practices for data governance in Databricks? (advanced)
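Two of the questions above, lazy evaluation and schema evolution, lend themselves to a short illustration. The hedged PySpark sketch below assumes a Databricks-style environment with Delta Lake available; the table path is a placeholder.

```python
# Lazy evaluation: transformations only build a plan; nothing runs until an action.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.range(1_000_000)                      # transformation: no job yet
doubled = df.withColumn("x2", F.col("id") * 2)   # still no job
filtered = doubled.filter(F.col("x2") > 10)      # still no job
print(filtered.count())                          # action: Spark now executes the plan

# Delta Lake schema evolution: mergeSchema lets an append add new columns to an
# existing Delta table instead of failing on the schema mismatch.
new_batch = filtered.withColumn("loaded_at", F.current_timestamp())
(new_batch.write.format("delta")
          .mode("append")
          .option("mergeSchema", "true")
          .save("/tables/demo_delta"))           # placeholder table path
```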

Closing Remark

As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
