
3345 Databricks Jobs - Page 45

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 years

10 Lacs

Gurgaon

Remote

Source: Glassdoor

Senior Data Engineer – Azure

This is a hands-on data platform engineering role that places significant emphasis on consultative data engineering engagements with a wide range of customer stakeholders: business owners, business analytics, data engineering teams, application development, end users, and management teams.

You will:
- Design and build resilient and efficient data pipelines for batch and real-time streaming.
- Architect and design data infrastructure on cloud using Infrastructure-as-Code tools.
- Collaborate with product managers, software engineers, data analysts, and data scientists to build scalable, data-driven platforms and tools.
- Provide technical product expertise, advise on deployment architectures, and handle in-depth technical questions around data infrastructure, PaaS services, design patterns, and implementation approaches.
- Collaborate with enterprise architects, data architects, ETL developers and engineers, data scientists, and information designers to lead the identification and definition of required data structures, formats, pipelines, metadata, and workload orchestration capabilities.
- Address aspects such as data privacy and security; data ingestion and processing; data storage and compute; analytical and operational consumption; data modeling; data virtualization; self-service data preparation and analytics; AI enablement; and API integrations.
- Lead a team of engineers to deliver impactful results at scale.
- Execute projects with an Agile mindset.
- Build software frameworks to solve data problems at scale.

Technical requirements:
- 7+ years of data engineering experience leading implementations of large-scale lakehouses on Databricks, Snowflake, or Synapse. Prior experience using dbt and Power BI is a plus.
- 3+ years of experience architecting solutions for data pipelines from structured and unstructured sources for batch and real-time workloads.
- Extensive experience with Azure data services (Databricks, Synapse, ADF) and related Azure infrastructure services (firewall, storage, Key Vault, etc.).
- Strong programming/scripting experience using SQL, Python, and Spark.
- Strong data modeling and data lakehouse concepts.
- Knowledge of software configuration management environments and tools such as Jira, Git, Jenkins, TFS, Shell, PowerShell, and Bitbucket.
- Experience with Agile development methods in data-oriented projects.

Other requirements:
- Highly motivated self-starter and team player with demonstrated success in prior roles.
- Track record of success working through technical challenges within enterprise organizations.
- Ability to prioritize deals, training, and initiatives through highly effective time management.
- Excellent problem-solving, analytical, presentation, and whiteboarding skills.
- Track record of success dealing with ambiguity (internal and external) and working collaboratively with other departments and organizations to solve challenging problems.
- Strong knowledge of technology and industry trends that affect data analytics decisions for enterprise organizations.
- Certifications in Azure data engineering and related technologies.

"Remote postings are limited to candidates residing within the country specified in the posting location."

About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the world’s leading technologies — across applications, data and security — to deliver end-to-end solutions. We have a proven record of advising customers based on their business challenges, designing solutions that scale, building and managing those solutions, and optimizing returns into the future. Named a best place to work, year after year, according to Fortune, Forbes and Glassdoor, we attract and develop world-class talent. Join us on our mission to embrace technology, empower customers and deliver the future.
More on Rackspace Technology Though we’re all different, Rackers thrive through our connection to a central goal: to be a valued member of a winning team on an inspiring mission. We bring our whole selves to work every day. And we embrace the notion that unique perspectives fuel innovation and enable us to best serve our customers and communities around the globe. We welcome you to apply today and want you to know that we are committed to offering equal employment opportunity without regard to age, color, disability, gender reassignment or identity or expression, genetic information, marital or civil partner status, pregnancy or maternity status, military or veteran status, nationality, ethnic or national origin, race, religion or belief, sexual orientation, or any legally protected characteristic. If you have a disability or special need that requires accommodation, please let us know.
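The batch-pipeline work described above usually revolves around moving only new or changed data on each run. A minimal, illustrative sketch of that watermark pattern, using SQLite as a stand-in for the warehouse (table names and schema are invented for the example, not taken from the posting):

```python
import sqlite3

def incremental_load(conn, watermark):
    """Copy source rows newer than the last processed timestamp into the target."""
    rows = conn.execute(
        "SELECT id, amount, updated_at FROM src WHERE updated_at > ?",
        (watermark,),
    ).fetchall()
    for row_id, amount, _ in rows:
        # INSERT OR REPLACE keeps reruns idempotent: reloading a row overwrites it.
        conn.execute("INSERT OR REPLACE INTO tgt (id, amount) VALUES (?, ?)",
                     (row_id, amount))
    conn.commit()
    # Advance the watermark so the next run only sees newer rows.
    return max((r[2] for r in rows), default=watermark)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, amount REAL, updated_at TEXT)")
conn.execute("CREATE TABLE tgt (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO src VALUES (?, ?, ?)",
                 [(1, 10.0, "2024-01-01"), (2, 20.0, "2024-01-02")])
watermark = incremental_load(conn, "2024-01-01")  # only row 2 is newer
```

The same idea scales up unchanged: Databricks and Synapse pipelines track a high-water mark per source and merge deltas rather than reloading full tables.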

Posted 1 week ago


3.0 - 5.0 years

10 - 14 Lacs

Gurugram, Bengaluru, Mumbai (All Areas)

Hybrid

Source: Naukri

Role & responsibilities:
- Design, develop, and maintain ETL workflows using Ab Initio.
- Manage and support critical data pipelines and data sets across complex, high-volume environments.
- Perform data analysis and troubleshoot issues across Teradata and Oracle data sources.
- Collaborate with DevOps for CI/CD pipeline integration using Jenkins, and manage deployments in Unix/Linux environments.
- Participate in Agile ceremonies including stand-ups, sprint planning, and roadmap discussions.
- Support cloud migration efforts, including potential adoption of Azure, Databricks, and PySpark-based solutions.
- Contribute to project documentation, metadata management (LDM, PDM), onboarding guides, and SOPs.

Preferred candidate profile:
- 3 years of experience in data engineering, with proven expertise in ETL development and maintenance.
- Proficiency with Ab Initio tools (GDE, EME, Control Center).
- Strong SQL skills, particularly with Oracle or Teradata.
- Solid experience with Unix/Linux systems and scripting.
- Familiarity with CI/CD pipelines using Jenkins or similar tools.
- Strong communication skills and ability to collaborate with cross-functional teams.
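Troubleshooting issues across two data sources, as described above, often starts with a key-level reconciliation: which records left the source but never reached the target? A small, generic sketch (pure Python, not Ab Initio-specific; the record shapes are invented for illustration):

```python
def missing_in_target(source_rows, target_rows, key):
    """Return source records whose key never arrived in the target."""
    target_keys = {key(row) for row in target_rows}
    return [row for row in source_rows if key(row) not in target_keys]

# Toy result sets standing in for Teradata (source) and Oracle (target) extracts.
src = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}, {"id": 3, "amt": 30}]
tgt = [{"id": 1, "amt": 10}, {"id": 3, "amt": 30}]
gaps = missing_in_target(src, tgt, key=lambda r: r["id"])  # row 2 was dropped
```

In practice the same comparison is pushed down into SQL (a `MINUS`/`EXCEPT` on the business key) so only the mismatches cross the network.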

Posted 1 week ago


15.0 years

0 Lacs

Bhubaneshwar

On-site

Source: Glassdoor

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Databricks resource with Azure cloud experience is needed.
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design scalable data solutions.
- Implement best practices for data governance and security throughout the data lifecycle.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud platforms and services related to data storage and processing.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.
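"Ensure data quality" in a pipeline typically means gating records before they land in the target tables. A minimal sketch of such a gate (field names and rules are illustrative, not taken from the posting):

```python
def quality_gate(records, required_fields):
    """Split incoming records into accepted rows and quarantined rows."""
    accepted, quarantined = [], []
    for record in records:
        # A record fails if any required field is absent, None, or empty.
        bad = [f for f in required_fields if record.get(f) in (None, "")]
        (quarantined if bad else accepted).append(record)
    return accepted, quarantined

rows = [{"id": 1, "city": "Pune"}, {"id": 2, "city": ""}, {"id": 3}]
good, bad = quality_gate(rows, ["id", "city"])  # rows 2 and 3 are quarantined
```

Quarantining instead of dropping keeps the pipeline auditable: bad rows go to a side table for investigation rather than silently disappearing.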

Posted 1 week ago


15.0 years

0 Lacs

Chennai

On-site

Source: Glassdoor

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Databricks resource with Azure cloud experience is needed.
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design scalable data solutions.
- Implement best practices for data governance and security throughout the data lifecycle.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud platforms and services related to data storage and processing.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted 1 week ago


9.0 years

6 - 7 Lacs

Chennai

On-site

Source: Glassdoor

Total 9 years of experience, with a minimum of 5 years working as a dbt administrator.
- dbt Core & Cloud: Manage dbt projects, models, tests, snapshots, and deployments in both dbt Core and dbt Cloud. Administer and manage dbt Cloud environments, including users, permissions, job scheduling, and Git integration. Onboard and enable dbt users on the dbt Cloud platform. Work closely with users to support dbt adoption and usage.
- SQL & Warehousing: Write optimized SQL and work with data warehouses like Snowflake, BigQuery, Redshift, or Databricks.
- Cloud Platforms: Use AWS, GCP, or Azure for data storage (e.g., S3, GCS), compute, and resource management.
- Orchestration Tools: Automate dbt runs using Airflow, Prefect, or dbt Cloud job scheduling.
- Version Control & CI/CD: Integrate dbt with Git and manage CI/CD pipelines for model promotion and testing.
- Monitoring & Logging: Track job performance and errors using tools like dbt-artifacts, Datadog, or cloud-native logging.
- Access & Security: Configure IAM roles, secrets, and permissions for secure dbt and data warehouse access.
- Documentation & Collaboration: Maintain model documentation, use dbt docs, and collaborate with data teams.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
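Much of the monitoring duty described above amounts to tallying job outcomes from dbt's run artifacts. A sketch of that idea, using a simplified payload whose shape only loosely mirrors dbt's `run_results.json` (the real artifact has more fields; this structure is an assumption for illustration):

```python
def summarize_run(run_results):
    """Tally per-status counts from a dbt run_results-style payload (shape simplified)."""
    counts = {}
    for result in run_results.get("results", []):
        status = result.get("status", "unknown")
        counts[status] = counts.get(status, 0) + 1
    return counts

# Hypothetical payload standing in for a parsed run_results.json.
payload = {"results": [{"status": "success"},
                       {"status": "success"},
                       {"status": "error"}]}
summary = summarize_run(payload)  # feed non-zero error counts to an alert
```

A wrapper like this is typically wired into the scheduler so a non-empty error count pages the on-call administrator rather than waiting for a user report.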

Posted 1 week ago


12.0 years

2 - 4 Lacs

Noida

Remote

Source: Glassdoor

Principal Engineering Manager - Data Engineering
Noida, Uttar Pradesh, India
Date posted: Jun 10, 2025
Job number: 1827114
Work site: Up to 50% work from home
Travel: 0-25%
Role type: People Manager
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview
Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers to levels they cannot achieve anywhere else. This is a world of more possibilities, more innovation, more openness in a cloud-enabled world. The Business & Industry Copilots group is a rapidly growing organization responsible for the Microsoft Dynamics 365 suite of products, Power Apps, Power Automate, Dataverse, AI Builder, Microsoft Industry Solution, and more. Microsoft is considered one of the leaders in Software as a Service in the world of business applications, and this organization is at the heart of how business applications are designed and delivered.

This is an exciting time to join our Customer Experience (CXP) group and work on something highly strategic to Microsoft. The goal of CXP Engineering is to build the next generation of our applications running on Dynamics 365, AI, Copilot, and several other Microsoft cloud services to drive AI transformation across the Marketing, Sales, Services, and Support organizations within Microsoft. We innovate quickly and collaborate closely with our partners and customers in an agile, high-energy environment. Leveraging the scalability and value of Azure and Power Platform, we ensure our solutions are robust and efficient. Our organization’s implementation acts as reference architecture for large companies and helps drive product capabilities. If the opportunity to collaborate with a diverse engineering team on enabling end-to-end business scenarios using cutting-edge technologies, and to solve challenging problems for large-scale 24x7 business SaaS applications, excites you, please come and talk to us!
We are hiring a passionate Principal SW Engineering Manager to lead a team of highly motivated and talented software developers building highly scalable data platforms and delivering services and experiences that empower Microsoft’s customer, seller, and partner ecosystem to be successful. This is a unique opportunity to use your leadership skills and experience in building core technologies that will directly affect the future of Microsoft on the cloud. In this position, you will be part of a fun-loving, diverse team that seeks challenges, loves learning, and values teamwork. You will collaborate with team members and partners to build high-quality and innovative data platforms with full-stack data solutions using the latest technologies in a dynamic and agile environment, and have opportunities to anticipate future technical needs of the team and provide technical leadership to keep raising the bar for our competition. We use industry-standard technology: C#, JavaScript/TypeScript, HTML5, ETL/ELT, data warehousing, and/or Business Intelligence development.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- 12+ years of experience building high-scale enterprise Business Intelligence and data engineering solutions.
- 3+ years of management experience leading a high-performance engineering team.
- Proficient in designing and developing distributed systems on a cloud platform.
- Must be able to plan work, and work to a plan, adapting as necessary in a rapidly evolving environment.
- Experience using a variety of data stores, including data ETL/ELT, warehouses, RDBMS, in-memory caches, and document databases.
- Experience using ML, anomaly detection, predictive analysis, and exploratory data analysis.
- A strong understanding of the value of data, data exploration, and the benefits of a data-driven organizational culture.
Strong communication skills and proficiency with executive communications. Demonstrated ability to effectively lead and operate in a cross-functional global organization.

Preferred Qualifications:
- Prior experience as an engineering site leader is a strong plus.
- Proven success in recruiting and scaling engineering organizations effectively.
- Demonstrated ability to provide technical leadership to teams, with experience managing large-scale data engineering projects.
- Hands-on experience working with large data sets using tools such as SQL, Databricks, PySpark SQL, Synapse, Azure Data Factory, or similar technologies.
- Expertise in one or more of the following areas: AI and machine learning.
- Experience with Business Intelligence or data visualization tools, particularly Power BI, is highly beneficial.

Responsibilities
As a leader of the engineering team, you will be responsible for the following:
- Build and lead a world-class data engineering team, passionate about technology and obsessed with customer needs.
- Champion data-driven decisions for feature identification, prioritization, and delivery.
- Manage multiple projects, including timelines, customer interaction, feature tradeoffs, etc.
- Deliver on an ambitious product and services roadmap, including building new services on top of the vast amount of data collected by our batch and near-real-time data engines.
- Design and architect internet-scale and reliable services.
- Leverage machine learning (ML) model knowledge to select appropriate solutions for business objectives.
- Communicate effectively and build relationships with our partner teams and stakeholders.
- Help shape our long-term architecture and technology choices across the full client and services stack.
- Understand the talent needs of the team and help recruit new talent.
- Mentor and grow other engineers to bring in efficiency and better productivity.
- Experiment with and recommend new technologies that simplify or improve the tech stack.
Work to help build an inclusive working environment.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
- Industry leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
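The anomaly detection called out in the qualifications is often as simple as flagging points far from a rolling baseline. A minimal z-score sketch (the data and threshold are invented for illustration; production systems use far more robust methods):

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold):
    """Indices of points whose z-score magnitude exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # constant series: nothing can be an outlier
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Toy latency series with one spike. With small samples the attainable
# z-score is bounded, so a modest threshold is used here.
latencies = [10, 11, 10, 9, 10, 11, 9, 10, 50]
spikes = zscore_outliers(latencies, threshold=2.0)  # flags the 50
```

On streaming data the same test is applied per window, with the mean and deviation maintained incrementally instead of recomputed from scratch.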

Posted 1 week ago


14.0 years

4 - 8 Lacs

Noida

On-site

Source: Glassdoor

Noida, Uttar Pradesh, India; Indore, Madhya Pradesh, India
Qualification: We are seeking a highly experienced and dynamic Technical Project Manager to lead and manage our service engagements. The candidate will possess a strong technical background, exceptional project management skills, and a proven track record of successfully delivering large-scale IT projects. You will be responsible for leading cross-functional teams, managing client relationships, and ensuring projects are delivered on time, within budget, and to the highest quality standards.
- 14+ years of experience managing and implementing high-end software products, combined with technical knowledge in the Business Intelligence (BI) and Data Engineering domains.
- 5+ years of experience in project management, with strong leadership and team management skills.
- Hands-on with project management tools (e.g., Jira, Rally, MS Project) and strong expertise in Agile methodologies (certifications such as SAFe, CSM, PMP, or PMI-ACP are a plus).
- Well versed in tracking project performance using appropriate metrics, tools, and processes to successfully meet short- and long-term goals.
- Rich experience interacting with clients, translating business needs into technical requirements, and delivering customer-focused solutions.
- Exceptional verbal and written communication skills, with the ability to present complex concepts to technical and non-technical stakeholders alike.
- Strong understanding of BI concepts (reporting, analytics, data warehousing, ETL), leveraging expertise in tools such as Tableau, Power BI, Looker, etc.
- Knowledge of data modeling, database design, and data governance principles.
- Proficiency in Data Engineering technologies (e.g., SQL, Python, cloud-based data solutions/platforms like AWS Redshift, Google BigQuery, Azure Synapse, Snowflake, Databricks) is a plus.
Skills Required: SAP BO, MicroStrategy, OBIEE, Tableau, Power BI
Role: This is a multi-dimensional and multi-functional role. You will need to be comfortable reporting program status to executives, as well as diving deep into technical discussions with internal engineering teams and external partners.
- Act as the primary point of contact for stakeholders and customers, gathering requirements, managing expectations, and delivering regular updates on project progress.
- Manage and mentor cross-functional teams, fostering collaboration and ensuring high performance while meeting project milestones.
- Drive Agile practices (e.g., Scrum, Kanban) to ensure iterative delivery, adaptability, and continuous improvement throughout the project lifecycle.
- Identify, assess, and mitigate project risks, ensuring timely resolution of issues and adherence to quality standards.
- Maintain comprehensive project documentation, including status reports, roadmaps, and post-mortem analyses, to ensure transparency and accountability.
- Define the project and delivery plan, including scope, timelines, budgets, and deliverables for each assignment.
- Capable of doing resource allocation as per the requirements of each assignment.
Experience: 14 to 18 years
Job Reference Number: 12929

Posted 1 week ago


4.0 - 6.0 years

9 - 19 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Source: Naukri

JOB DESCRIPTION:
• Strong experience in Azure Data Factory, Databricks, Event Hubs, Python, PySpark, Azure Synapse, and SQL.
• Azure DevOps experience to deploy the ADF pipelines.
• Knowledge of / experience with the Azure cloud stack.

Posted 1 week ago


9.0 - 17.0 years

5 - 8 Lacs

Indore

On-site

Source: Glassdoor

Indore, Madhya Pradesh, India; Bengaluru, Karnataka, India; Noida, Uttar Pradesh, India; Gurugram, Haryana, India; Hyderabad, Telangana, India; Pune, Maharashtra, India
Qualification: Senior Cloud Engineer/Architect with experience designing and implementing the overall cloud architecture for a data intelligence platform, ensuring scalability, reliability, and security.
Skills Required: AWS DevOps, Databricks, Kubernetes
Role: Key responsibilities:
- Designing and implementing the overall cloud architecture for the data intelligence platform, ensuring scalability, reliability, and security.
- Evaluating and selecting appropriate AWS services to meet the platform's requirements.
- Developing and maintaining reusable Infrastructure as Code (IaC) using Terraform to automate the provisioning and management of cloud resources.
- Implementing CI/CD pipelines for continuous integration and deployment of infrastructure changes.
- Ensuring that the platform adheres to security best practices and MMC compliance requirements, including data encryption, access controls, and monitoring.
- Collaborating with security teams to implement security measures and conduct regular audits.
- Integrating the marketplace with end services to facilitate service requests and provisioning.
- Experience working on the Databricks platform is required.
Experience: 9 to 17 years
Job Reference Number: 13048
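The compliance-audit duty above is often automated as a policy check over the resource inventory, e.g. verifying that every resource carries mandatory tags. A tool-agnostic sketch (the inventory shape and tag names are invented; real setups read this from Terraform state or a cloud API):

```python
def audit_tags(resources, required_tags):
    """Flag cloud resources missing any mandatory tag (a simple compliance audit)."""
    violations = {}
    for res in resources:
        missing = sorted(t for t in required_tags if t not in res.get("tags", {}))
        if missing:
            violations[res["id"]] = missing
    return violations

inventory = [
    {"id": "vm-1", "tags": {"owner": "data-eng", "env": "prod"}},
    {"id": "bkt-2", "tags": {"env": "prod"}},  # missing the owner tag
]
report = audit_tags(inventory, {"owner", "env"})
```

Run inside a CI/CD pipeline, a non-empty report fails the build, so untagged resources never reach production in the first place.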

Posted 1 week ago


7.0 - 10.0 years

10 Lacs

Indore

On-site

Source: Glassdoor

Indore, Madhya Pradesh, India; Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Bangalore, Karnataka, India; Pune, Maharashtra, India
Qualification:
- Hands-on experience working with SAS-to-Python conversions.
- Strong mathematics and statistics skills.
- Skilled in AI-specific utilities like ChatGPT, Hugging Face Transformers, etc.
- Ability to understand business requirements.
- Use case derivation and solution creation from structured/unstructured data.
- Storytelling, business communication, and documentation.
- Programming skills: SAS, Python, Scikit-Learn, TensorFlow, PyTorch, Keras.
- Exploratory data analysis.
- Machine learning and deep learning algorithms.
- Model building, hyperparameter tuning, and model performance metrics.
- MLOps, data pipelines, data engineering.
- Statistics knowledge (probability distributions, hypothesis testing).
- Time series modeling, forecasting, image/video analytics, and natural language processing (NLP).
- ML services from clouds such as AWS, GCP, Azure, and Databricks.
- Optional: Databricks, big data (basic knowledge of Spark, Hive).
Skills Required: Python, SAS, Machine Learning
Role: Responsible for SAS-to-Python code conversion. Acquire the skills required to build machine learning models and deploy them to production. Feature engineering, EDA, pipeline creation, model training, and hyperparameter tuning with structured and unstructured data sets. Develop and deploy cloud-based applications, including LLM/GenAI, into production.
Experience: 7 to 10 years
Job Reference Number: 13070
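Hyperparameter tuning, mentioned above, reduces at its simplest to scoring every parameter combination and keeping the best. A dependency-free sketch with a toy threshold classifier (the model and grid are invented for illustration; in practice this is scikit-learn's `GridSearchCV` with cross-validation):

```python
from itertools import product

def grid_search(data, grid, score_fn):
    """Exhaustively score every hyperparameter combination; keep the best."""
    best_params, best_score = None, float("-inf")
    for combo in product(*grid.values()):
        params = dict(zip(grid, combo))
        score = score_fn(data, params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy model: predict class 1 when x >= t; score = accuracy on the data.
points = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]

def accuracy(data, params):
    hits = sum((x >= params["t"]) == bool(y) for x, y in data)
    return hits / len(data)

best, score = grid_search(points, {"t": [1.5, 2.5, 3.5]}, accuracy)
```

The same loop structure underlies fancier strategies (random search, Bayesian optimization); only the way candidate combinations are proposed changes.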

Posted 1 week ago


15.0 years

0 Lacs

Indore

On-site

Source: Glassdoor

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Databricks resource with Azure cloud experience is needed.
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design scalable data solutions.
- Implement best practices for data governance and security throughout the data lifecycle.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud platforms and services related to data storage and processing.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted 1 week ago


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

LiveRamp is the data collaboration platform of choice for the world’s most innovative companies. A groundbreaking leader in consumer privacy, data ethics, and foundational identity, LiveRamp is setting the new standard for building a connected customer view with unmatched clarity and context while protecting precious brand and consumer trust. LiveRamp offers complete flexibility to collaborate wherever data lives to support the widest range of data collaboration use cases—within organizations, between brands, and across its premier global network of top-quality partners. Hundreds of global innovators, from iconic consumer brands and tech giants to banks, retailers, and healthcare leaders turn to LiveRamp to build enduring brand and business value by deepening customer engagement and loyalty, activating new partnerships, and maximizing the value of their first-party data while staying on the forefront of rapidly evolving compliance and privacy requirements. LiveRamp is looking for a Senior Software Development Engineer (SDE) to join our team focused on building scalable, high-performance backend systems that power our data collaboration clean room platform. In this role, you will design and implement distributed services that enable secure data processing across organizations. You’ll work with modern data infrastructure such as Apache Spark, Airflow, and cloud-native tools. You will help lead the development of reliable microservices that integrate with platforms like Snowflake, Databricks, and SingleStore. This is a hands-on role that requires deep technical expertise, strong system design skills, and a passion for solving complex large data challenges in a data collaborative environment. You Will Design, build, and maintain scalable backend systems that support high-performance data ingestion, partitioning, and constructing data pipelines. Plan and deliver key development milestones to improve system reliability, throughput, and maintainability. 
Write clean, efficient, and well-tested code in Java, Go, Scala, and Spark that can be deployed reliably in production environments. Collaborate cross-functionally with product, infrastructure, and internal teams to define requirements and implement solutions. Lead and participate in code reviews, design reviews, and incident response processes. Contribute to infrastructure improvements by leveraging tools such as Kubernetes, Terraform, and Helm. Design with privacy and security-first principles and contribute to platform-level decisions that prioritize fast iteration and customer-centric delivery. Mentor junior engineers and contribute to the technical growth of the team. Measure success by improved system performance metrics, reduced incident counts, increased team velocity, and milestone completion. Your Team Will Work on distributed data systems that process large-scale, heterogeneous datasets in near real-time. Build and enhance microservices that power complex models used in analytics and customer applications. Solve high-impact engineering problems related to performance optimization, fault tolerance, and cloud-native service orchestration. Partner closely with data engineering, infrastructure, and security teams to deliver end-to-end features and improvements. About You 5+ years of experience designing and implementing scalable backend systems and distributed services in production environments. Proficiency in Java or Go, and familiarity with Python or Scala. Experience with Apache Spark and Apache Airflow in a production setting. Hands-on experience with a major cloud provider (AWS, GCP, or Azure). Ability to evaluate and architect cloud solutions based on strengths and trade-offs. Familiarity with Kubernetes, Helm, and Terraform for cloud-native service deployment. Experience with one or more modern cloud data platforms such as Snowflake, Databricks, or SingleStore. Proven ability to mentor engineers and influence team technical direction. 
Demonstrated experience in designing and scaling distributed systems. Location: Work in the heart of Hyderabad. Preferred Skills Experience with modern data lakehouse or query engines such as Apache Iceberg, Trino, or Presto. Familiarity with CI/CD practices and observability tools for monitoring distributed systems. Exposure to large-scale data modeling. Benefits Flexible paid time off, paid holidays, options for working from home, and paid parental leave. Comprehensive Benefits Package: LiveRamp offers a comprehensive benefits package designed to help you be your best self in your personal and professional lives. Our benefits package offers medical, dental, vision, accident, life and disability, an employee assistance program, voluntary benefits as well as perks programs for your healthy lifestyle, career growth, and more. Your medical benefits extend to your dependents, including parents. More About Us LiveRamp’s mission is to connect data in ways that matter, and doing so starts with our people. We know that inspired teams enlist people from a blend of backgrounds and experiences. And we know that individuals do their best when they not only bring their full selves to work but feel like they truly belong. Connecting LiveRampers to new ideas and one another is one of our guiding principles—one that informs how we hire, train, and grow our global team across nine countries and four continents. Click here to learn more about Diversity, Inclusion, & Belonging (DIB) at LiveRamp. To all recruitment agencies: LiveRamp does not accept agency resumes. Please do not forward resumes to our jobs alias, LiveRamp employees or any other company location. LiveRamp is not responsible for any fees related to unsolicited resumes.
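The posting above centers on data ingestion and partitioning for large-scale distributed systems. As a minimal, framework-free sketch (all names here are illustrative, not LiveRamp's actual code), deterministic hash partitioning is the basic primitive that engines like Spark and Kafka use to distribute keyed records across workers:

```python
import hashlib

def partition(records, n_parts):
    """Assign each (key, value) record to a partition by hashing its key.

    Hashing the key (rather than round-robin) guarantees that all records
    sharing a key land in the same partition, which keyed joins and
    aggregations depend on.
    """
    parts = [[] for _ in range(n_parts)]
    for key, value in records:
        idx = int(hashlib.sha1(key.encode()).hexdigest(), 16) % n_parts
        parts[idx].append((key, value))
    return parts

records = [("user1", 10), ("user2", 20), ("user1", 30)]
parts = partition(records, 4)
print(sum(len(p) for p in parts))  # 3
```

Real engines add spill-to-disk, skew handling, and shuffle transport on top of this idea, but the co-location invariant is the same.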

Posted 1 week ago


2.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


Description Invent the future with us. Recognized by Fast Company’s 2023 100 Best Workplaces for Innovators List, Ampere is a semiconductor design company for a new era, leading the future of computing with an innovative approach to CPU design focused on high-performance, energy-efficient, sustainable cloud computing. By providing a new level of predictable performance, efficiency, and sustainability, Ampere is working with leading cloud suppliers and a growing partner ecosystem to deliver cloud instances, servers and embedded/edge products that can handle the compute demands of today and tomorrow. Join us at Ampere and work alongside a passionate and growing team — we’d love to have you apply! About The Role Ampere Computing’s Enterprise Data and AI Team is seeking a Data Engineer proficient in modern data tools within the Azure environment. In this highly collaborative role, you will design, develop, and maintain data pipelines and storage solutions that support our business objectives. This position offers an excellent opportunity to enhance your technical skills, work on impactful projects, and grow your career in data engineering within a supportive and innovative environment. What You’ll Achieve Data Pipeline Development: Design, develop, and maintain data pipelines using Azure technologies such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics. Data Modeling: Collaborate with senior engineers to create and optimize data models that support business intelligence and analytics requirements. Data Storage Solutions: Implement and manage data storage solutions using Azure Data Lake Storage (Gen 2) and Cosmos DB. Coding and Scripting: Write efficient and maintainable code in Python, Scala, or PySpark for data transformation and processing tasks. Collaboration: Work closely with cross-functional teams to understand data requirements and deliver robust data solutions. 
Continuous Learning: Stay updated with the latest Azure services and data engineering best practices to continuously enhance technical skills. Support and Maintenance: Provide ongoing support for existing data infrastructure, troubleshoot issues, and implement improvements as needed. Documentation: Document data processes, architecture, and workflows to ensure clarity and maintainability. About You Bachelor's degree in Computer Science, Information Technology, Engineering, Data Science, or a related field. 2+ years of experience in a data-related role. Proficiency with Azure data services (e.g., Databricks, Synapse Analytics, Data Factory, Data Lake Storage Gen2). Working knowledge of SQL and at least one programming language (e.g., Python, Scala, PySpark). Strong analytical and problem-solving skills with the ability to translate complex data into actionable insights. Excellent communication skills, with the ability to explain technical concepts to diverse audiences. Experience with data warehousing concepts, ETL processes, and version control systems (e.g., Git). Familiarity with Agile methodologies. What We’ll Offer At Ampere we believe in taking care of our employees and providing a competitive total rewards package that includes base pay, bonus (i.e., variable pay tied to internal company goals), long-term incentive, and comprehensive benefits. Benefits Highlights Include Premium medical, dental, vision insurance, parental benefits including creche reimbursement, as well as a retirement plan, so that you can feel secure in your health, financial future and child care during work. Generous paid time off policy so that you can embrace a healthy work-life balance Fully catered lunch in our office along with a variety of healthy snacks, energizing coffee or tea, and refreshing drinks to keep you fueled and focused throughout the day. And there is much more than compensation and benefits. 
At Ampere, we foster an inclusive culture that empowers our employees to do more and grow more. We are passionate about inventing industry-leading cloud-native designs that contribute to a more sustainable future. We are excited to share more about our career opportunities with you through the interview process. Ampere is an inclusive and equal opportunity employer and welcomes applicants from all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, religion, age, veteran and/or military status, sex, sexual orientation, gender, gender identity, gender expression, physical or mental disability, or any other basis protected by federal, state or local law.

Posted 1 week ago


4.0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform Responsibilities Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework using Python or Scala and Big Data technologies built on the platform Experience in developing streaming pipelines Experience working with Hadoop / AWS ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing Preferred Education Master's Degree Required Technical And Professional Expertise Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala; Minimum 3 years of experience on Cloud Data Platforms on AWS; Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB Good to excellent SQL skills Exposure to streaming solutions and message brokers like Kafka Preferred Technical And Professional Experience Certification in AWS and Databricks, or Cloudera Spark Certified developers
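The ingest → process → load flow this posting describes is what a Spark batch job implements at scale. A minimal sketch of the same three stages, using only the standard library rather than Spark (all function and field names are illustrative):

```python
import csv
import io

def ingest(raw: str):
    # Ingest: parse CSV records from a file-like source
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    # Transform: normalise types and drop malformed records — the kind of
    # logic a Spark job would express as DataFrame operations
    out = []
    for r in rows:
        try:
            r["amount"] = float(r["amount"])
        except (KeyError, ValueError):
            continue  # production pipelines route rejects to a quarantine path
        out.append(r)
    return out

def load(rows, store):
    # Load: upsert into the target store keyed by id
    for r in rows:
        store[r["id"]] = r
    return store

raw = "id,amount\na,10.5\nb,oops\nc,3.0\n"
store = load(transform(ingest(raw)), {})
print(sorted(store))  # ['a', 'c']
```

In Spark the same shape appears as `read` → transformations → `write`; the value of keeping the stages separate, as here, is that each can be unit-tested on its own.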

Posted 1 week ago


0 years

0 Lacs

Andhra Pradesh, India

On-site


Design and execute test strategies and test cases for applications built on Azure Data Pipelines and Databricks. Validate data transformations, movement, and integrity across the data pipeline. Good hands-on skills with SQL. Hands-on experience in test automation frameworks such as Selenium, Cucumber, or Cypress. Collaborate closely with developers, product owners, and business analysts in Agile Scrum teams to deliver high-quality software. Proficiency in using tools such as Jira and Confluence. Perform functional, regression, integration, and performance testing. Ensure timely identification, tracking, and resolution of software defects. Nice To Have Experience with API testing using tools like Postman
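The data-validation work described here typically starts with reconciliation checks between a source table and its transformed target. A minimal sketch using SQLite in place of the Azure SQL/Databricks tables (table names are illustrative, and assumed to come from trusted configuration, not user input):

```python
import sqlite3

def validate_counts(conn, src: str, tgt: str) -> bool:
    # Row-count reconciliation: the simplest pipeline integrity check —
    # every source row should have landed in the target.
    (n_src,) = conn.execute(f"SELECT COUNT(*) FROM {src}").fetchone()
    (n_tgt,) = conn.execute(f"SELECT COUNT(*) FROM {tgt}").fetchone()
    return n_src == n_tgt

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src(id INTEGER, val TEXT);
CREATE TABLE tgt(id INTEGER, val TEXT);
INSERT INTO src VALUES (1, 'a'), (2, 'b');
INSERT INTO tgt VALUES (1, 'a'), (2, 'b');
""")
print(validate_counts(conn, "src", "tgt"))  # True

# Simulate a pipeline that silently dropped a row
conn.execute("DELETE FROM tgt WHERE id = 2")
print(validate_counts(conn, "src", "tgt"))  # False
```

Real suites layer checksum and column-level comparisons on top of counts, but a failing count check is usually the first and cheapest signal of a broken transformation.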

Posted 1 week ago


1.0 - 3.0 years

3 - 5 Lacs

New Delhi, Chennai, Bengaluru

Hybrid


Your day at NTT DATA We are seeking an experienced Data Engineer to join our team in delivering cutting-edge Generative AI (GenAI) solutions to clients. The successful candidate will be responsible for designing, developing, and deploying data pipelines and architectures that support the training, fine-tuning, and deployment of LLMs for various industries. This role requires strong technical expertise in data engineering, problem-solving skills, and the ability to work effectively with clients and internal teams. What you'll be doing Key Responsibilities: Design, develop, and manage data pipelines and architectures to support GenAI model training, fine-tuning, and deployment Data Ingestion and Integration: Develop data ingestion frameworks to collect data from various sources, transform, and integrate it into a unified data platform for GenAI model training and deployment. GenAI Model Integration: Collaborate with data scientists to integrate GenAI models into production-ready applications, ensuring seamless model deployment, monitoring, and maintenance. Cloud Infrastructure Management: Design, implement, and manage cloud-based data infrastructure (e.g., AWS, GCP, Azure) to support large-scale GenAI workloads, ensuring cost-effectiveness, security, and compliance. Write scalable, readable, and maintainable code using object-oriented programming concepts in languages like Python, and utilize libraries like Hugging Face Transformers, PyTorch, or TensorFlow Performance Optimization: Optimize data pipelines, GenAI model performance, and infrastructure for scalability, efficiency, and cost-effectiveness. Data Security and Compliance: Ensure data security, privacy, and compliance with regulatory requirements (e.g., GDPR, HIPAA) across data pipelines and GenAI applications. Client Collaboration: Collaborate with clients to understand their GenAI needs, design solutions, and deliver high-quality data engineering services. 
Innovation and R&D: Stay up to date with the latest GenAI trends, technologies, and innovations, applying research and development skills to improve data engineering services. Knowledge Sharing: Share knowledge, best practices, and expertise with team members, contributing to the growth and development of the team. Bachelor's degree in Computer Science, Engineering, or related fields (Master's recommended) Experience with vector databases (e.g., Pinecone, Weaviate, Faiss, Annoy) for efficient similarity search and storage of dense vectors in GenAI applications 5+ years of experience in data engineering, with a strong emphasis on cloud environments (AWS, GCP, Azure, or Cloud Native platforms) Proficiency in programming languages like SQL, Python, and PySpark Strong data architecture, data modeling, and data governance skills Experience with Big Data Platforms (Hadoop, Databricks, Hive, Kafka, Apache Iceberg), Data Warehouses (Teradata, Snowflake, BigQuery), and lakehouses (Delta Lake, Apache Hudi) Knowledge of DevOps practices, including Git workflows and CI/CD pipelines (Azure DevOps, Jenkins, GitHub Actions) Experience with GenAI frameworks and tools (e.g., TensorFlow, PyTorch, Keras) Nice to have: Experience with containerization and orchestration tools like Docker and Kubernetes Ability to integrate vector databases and implement similarity search techniques, with a focus on GraphRAG, is a plus Familiarity with API gateway and service mesh architectures Experience with low latency/streaming, batch, and micro-batch processing Familiarity with Linux-based operating systems and REST APIs
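The vector-database requirement above comes down to similarity search over dense embeddings. A brute-force sketch of the core operation, in plain Python with illustrative document names — systems like Faiss, Pinecone, or Weaviate replace this linear scan with approximate-nearest-neighbour indexes so it scales to millions of vectors:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product of the vectors over the
    # product of their magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    # Exact (brute-force) nearest-neighbour search over an id -> vector map
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

index = {"doc1": [1.0, 0.0], "doc2": [0.9, 0.1], "doc3": [0.0, 1.0]}
print(top_k([1.0, 0.05], index))  # ['doc1', 'doc2']
```

In a RAG pipeline the query vector comes from the same embedding model that produced the stored vectors; the retrieved ids select the passages fed to the LLM.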

Posted 1 week ago


3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Databricks Unified Data Analytics Platform, Payroll SAP Integration Support, SAP S4 integration support Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality applications that meet user expectations and business goals. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Assist in the documentation of application specifications and user guides. - Collaborate with cross-functional teams to gather requirements and provide technical insights. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with cloud computing platforms. - Strong understanding of application development methodologies. - Familiarity with data integration and ETL processes. - Experience in programming languages such as Python or Scala. Additional Information: - The candidate should have minimum 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Chennai office. - A 15 years full time education is required.

Posted 1 week ago


7.5 years

0 Lacs

Bhubaneswar, Odisha, India

On-site


Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Business Agility Minimum 7.5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders. Roles & Responsibilities: - Need Databricks resource with Azure cloud experience - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Collaborate with data architects and analysts to design scalable data solutions. - Implement best practices for data governance and security throughout the data lifecycle. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Business Agility. - Strong understanding of data modeling and database design principles. - Experience with data integration tools and ETL processes. - Familiarity with cloud platforms and services related to data storage and processing. 
Additional Information: - The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Bhubaneswar office. - A 15 years full time education is required.

Posted 1 week ago


1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Company Description WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees. Job Description: The ideal candidate would be adept at understanding customers' business challenges and defining an appropriate analytics approach to design the solution Should be able to convert mathematics/statistics-based research and academic literature into sustainable data science solutions This is a hands-on role; the candidate will be required to manage day-to-day delivery activities in executing analytics projects by analysing large volumes of data Should have familiarity with solutions in core functional areas related to Promotion Effectiveness, Digital Marketing, Customer Relationship Management (CRM), Campaign Management & Data Insights etc. Evolve the approach for the application of machine learning/deep learning to existing program and project disciplines He/She would also be responsible for creating business & technical presentations, reports etc. 
to present the analysis findings to the end clients and for business development. This role requires excellent communication skills. Must Have: Hands-on experience in exploratory data analysis, A/B testing, campaign measurement and model building (end to end), customer analytics, loyalty and promotions. 1-2 years in the hospitality industry. Tools: Databricks, Python, SQL; experience on AWS/Azure platforms. Excellent communication skills, both oral and written. Functional/Domain Experience: Good exposure to the hospitality industry and its datasets. Relevant Experience: 5+ years of hands-on experience in Data Science. Qualifications Educational Criteria: Masters in Statistics/Mathematics/Economics/Econometrics from Tier 1 institutions, or MBA from Tier 1 institutions – Preferred
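The A/B testing and campaign measurement work this posting asks for usually reduces to comparing conversion rates between a control and a treatment group. A minimal sketch of the standard two-proportion z-test (the counts below are made-up example data):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # Two-proportion z-test for an A/B conversion experiment:
    # pooled proportion, pooled standard error, then the z statistic
    # for the difference between variant B and control A.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 200/1000 conversions in control vs 260/1000 in the campaign variant
z = two_proportion_z(200, 1000, 260, 1000)
print(round(z, 2))  # 3.19
```

Since |z| > 1.96, the lift would be significant at the 5% level under the usual two-sided test; in practice a campaign-measurement pipeline also checks sample-size assumptions and corrects for multiple comparisons.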

Posted 1 week ago


8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Experience: 8+ years Location: Knowledge City, Hyderabad Work Model: Hybrid Regular work hours No. of rounds: 1 internal technical round & 2 client rounds About You The GCP CloudOps Engineer is accountable for continuous, repeatable, secure, and automated deployment, integration, and test solutions utilizing Infrastructure as Code (IaC) and DevSecOps techniques. 8+ years of hands-on experience in infrastructure design, implementation, and delivery 3+ years of hands-on experience with monitoring tools (Datadog, New Relic, or Splunk) 4+ years of hands-on experience with container orchestration services, including Docker or Kubernetes, GKE. Experience with working across time zones and with different cultures. 5+ years of hands-on experience in Cloud technologies; GCP is preferred. Maintain an outstanding level of documentation, including principles, standards, practices, and project plans. Experience building a data warehouse using Databricks is a huge plus. Hands-on experience with IaC patterns and practices and related automation tools such as Terraform, Jenkins, Spinnaker, CircleCI, etc.; experience building automation and tools using Python, Go, Java, or Ruby. Deep knowledge of CI/CD processes, tools, and platforms like GitHub workflows and Azure DevOps. Proactive collaborator who can work on cross-team initiatives, with excellent written and verbal communication skills. Experience with automating long-term solutions to problems rather than applying a quick fix. Extensive knowledge of improving platform observability and implementing optimizations to monitoring and alerting tools. Experience measuring and modeling cost and performance metrics of cloud services and establishing a vision backed by data. 
Develop tools and CI/CD framework to make it easier for teams to build, configure, and deploy applications Contribute to Cloud strategy discussions and decisions on overall Cloud design and best approach for implementing Cloud solutions Follow and develop standards and procedures for all aspects of a Digital Platform in the Cloud Identify system enhancements and automation opportunities for installing/maintaining digital platforms Adhere to best practices on Incident, Problem, and Change management Implement automated procedures to handle issues and alerts proactively Experience with debugging applications and a deep understanding of deployment architectures. Pluses Databricks Experience with a multicloud environment (GCP, AWS, Azure); GCP is the preferred cloud provider. Experience with GitHub and GitHub Actions Skills: ci,azure,cd,cicd processes,cloud,devsecops,infrastructure,monitoring tools (datadog, new relic, splunk),databricks,azure devops,terraform,automation,aws,ruby,java,github,python,infrastructure as code (iac),jenkins,gcp,container orchestration (docker, kubernetes, gke),spinnaker,github workflows,circleci,gcp cloud operations,devops,iac,operations,go

Posted 1 week ago


5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Business Agility Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders. Roles & Responsibilities: - Need Databricks resource with Azure cloud experience - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Collaborate with data architects and analysts to design scalable data solutions. - Implement best practices for data governance and security throughout the data lifecycle. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Business Agility. - Strong understanding of data modeling and database design principles. - Experience with data integration tools and ETL processes. - Familiarity with cloud platforms and services related to data storage and processing. 
Additional Information: - The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Indore office. - A 15 years full time education is required.

Posted 1 week ago


10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Role Description Role Proficiency: Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions. Account for others' developmental activities, assisting the Project Manager in day-to-day project execution. Outcomes Interpret the application feature and component designs to develop them in accordance with specifications. Code, debug, test, document, and communicate product component and feature development stages. Validate results with user representatives, integrating and commissioning the overall solution. Select and create appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, while creating own solutions for new contexts. Optimize efficiency, cost, and quality. Influence and improve customer satisfaction. Influence and improve employee engagement within the project teams. Set FAST goals for self/team; provide feedback on FAST goals of team members. Measures Of Outcomes Adherence to engineering process and standards (coding standards) Adherence to project schedule/timelines Number of technical issues uncovered during the execution of the project Number of defects in the code Number of defects post delivery Number of non-compliance issues Percent of voluntary attrition On-time completion of mandatory compliance trainings Code Outputs Expected: Code as per the design Define coding standards, templates, and checklists Review code for team and peers Documentation Create/review templates, checklists, guidelines, and standards for design/process/development Create/review deliverable documents. 
Design documentation Requirements test cases and results Configure Define and govern configuration management plan Ensure compliance from the team Test Review/Create unit test cases scenarios and execution Review test plan created by testing team Provide clarifications to the testing team Domain Relevance Advise software developers on design and development of features and components with deeper understanding of the business problem being addressed for the client Learn more about the customer domain and identify opportunities to provide value addition to customers Complete relevant domain certifications Manage Project Support Project Manager with inputs for the projects Manage delivery of modules Manage complex user stories Manage Defects Perform defect RCA and mitigation Identify defect trends and take proactive measures to improve quality Estimate Create and provide input for effort and size estimation and plan resources for projects Manage Knowledge Consume and contribute to project related documents share point libraries and client universities Review the reusable documents created by the team Release Execute and monitor release process Design Contribute to creation of design (HLD LLD SAD)/architecture for applications features business components and data models Interface With Customer Clarify requirements and provide guidance to Development Team Present design options to customers Conduct product demos Work closely with customer architects for finalizing design Manage Team Set FAST goals and provide feedback Understand aspirations of the team members and provide guidance opportunities etc Ensure team members are upskilled Ensure team is engaged in project Proactively identify attrition risks and work with BSE on retention measures Certifications Obtain relevant domain and technology certifications Skill Examples Explain and communicate the design / development to the customer Perform and evaluate test results against product specifications Break down complex 
problems into logical components

- Develop user interfaces and business software components
- Use data models
- Estimate the time and effort of resources required for developing / debugging features / components
- Perform and evaluate tests in the customer or target environments
- Make quick decisions on technical/project-related challenges
- Manage a team, mentor, and handle people-related issues in the team
- Maintain high motivation levels and positive dynamics within the team
- Interface with other teams, designers, and other parallel practices
- Set goals for self and team; provide feedback to team members
- Create and articulate impactful technical presentations
- Follow a high level of business etiquette in emails and other business communication
- Drive conference calls with customers and answer customer questions
- Proactively ask for and offer help
- Work under pressure; determine dependencies and risks; facilitate planning; handle multiple tasks
- Build confidence with customers by meeting deliverables on time with a quality product

Knowledge Examples:

- Appropriate software programs / modules
- Functional & technical design
- Programming languages – proficiency in multiple skill clusters
- DBMS
- Operating systems and software platforms
- Software Development Life Cycle
- Agile – Scrum or Kanban methods
- Integrated development environments (IDE)
- Rapid application development (RAD)
- Modelling technologies and languages
- Interface definition languages (IDL)
- Broad knowledge of the customer domain and deep knowledge of the sub-domain where the problem is solved

Additional Comments

Key Responsibilities:

- Design, build, and manage end-to-end ETL/ELT workflows using Azure Data Factory (ADF) to support supply chain data movement and transformation.
- Integrate data from multiple sources such as ERP systems, logistics platforms, warehouses, APIs, and third-party providers into Azure Data Lake or Synapse Analytics.
- Ensure high-performance, scalable, and secure data pipelines aligned with business and compliance requirements.
- Collaborate with business analysts, data architects, and supply chain SMEs to understand data needs and implement effective solutions.
- Write and optimize complex SQL queries, stored procedures, and data transformation logic.
- Monitor, troubleshoot, and optimize ADF pipelines for latency, throughput, and reliability.
- Support data validation, quality assurance, and governance processes.
- Document data flows, transformation logic, and technical processes.
- Work in an Agile/Scrum delivery model to support iterative development and rapid delivery.

Required Skills & Experience:

- 9–10 years of experience in Data Engineering and ETL development, with at least 3–5 years in Azure Data Factory.
- Strong knowledge of Azure Data Lake, Azure SQL DB, Azure Synapse, Blob Storage, and Data Flows in ADF.
- Proficiency in SQL, T-SQL, and performance tuning of queries.
- Experience working with structured, semi-structured (JSON, XML), and unstructured data.
- Exposure to supply chain data sources such as ERP (e.g., SAP, Oracle), TMS, WMS, or inventory/order management systems.
- Experience with Git, Azure DevOps, or other version control and CI/CD tools.
- Basic understanding of Databricks, Python, or Spark is a plus.
- Familiarity with data quality, metadata management, and lineage tools.
- Bachelor's degree in Computer Science, Engineering, or a related field.

Preferred Qualifications:

- Experience in Supply Chain Analytics or Operations.
- Knowledge of forecasting, inventory planning, procurement, logistics, or demand planning data flows.
- Microsoft Certified: Azure Data Engineer Associate certification is preferred.

Soft Skills:

- Strong problem-solving and analytical skills.
- Ability to communicate effectively with business and technical stakeholders.
- Experience working in Agile/Scrum teams.
- Proactive, self-motivated, and detail-oriented.

Skills: Azure Data Factory, Azure Data Lake, Blob Storage
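The extract-transform-load pattern this role centers on can be sketched in plain Python. This is a toy illustration only: a real pipeline would run in Azure Data Factory or PySpark, and the function names (`extract`, `transform`, `load`) and fields (`sku`, `qty`) are hypothetical, not any ADF API.

```python
# Toy sketch of the ETL pattern: extract raw records, clean and normalise
# them, then load into a target store. Names and fields are illustrative.

def extract(raw_rows):
    """Extract: keep only non-null records from the raw feed."""
    return [row for row in raw_rows if row is not None]

def transform(rows):
    """Transform: normalise field names and drop invalid quantities."""
    out = []
    for row in rows:
        qty = row.get("qty", 0)
        if qty > 0:
            out.append({"sku": row["sku"].upper(), "quantity": qty})
    return out

def load(rows, sink):
    """Load: append the cleaned rows to a target store (here, a list)."""
    sink.extend(rows)
    return len(rows)

warehouse = []
raw = [{"sku": "abc-1", "qty": 3}, {"sku": "xyz-9", "qty": 0}, None]
loaded = load(transform(extract(raw)), warehouse)
print(loaded)        # rows that survived cleaning
print(warehouse[0])  # first cleaned row
```

The same three stages map onto ADF activities or Spark transformations; only the scale and the runtime change.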

Posted 1 week ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum Experience: 12 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:

- A Databricks resource with Azure cloud experience is needed.
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design scalable data solutions.
- Implement best practices for data governance and security throughout the data lifecycle.

Professional & Technical Skills:

- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud platforms and services related to data storage and processing.
Additional Information:

- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.
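"Ensure data quality" in roles like this usually means running validation rules before data is loaded. A minimal plain-Python sketch of such checks follows; the rule set, field names, and `check_quality` helper are hypothetical, and production setups would more likely use Delta Lake constraints or a dedicated data-quality tool.

```python
# Minimal sketch of pre-load data-quality checks: required fields present,
# key column unique. Field names and rules are illustrative only.

def check_quality(rows, required_fields, unique_key):
    """Return a list of human-readable violations found in rows."""
    violations = []
    seen_keys = set()
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                violations.append(f"row {i}: missing '{field}'")
        key = row.get(unique_key)
        if key in seen_keys:
            violations.append(f"row {i}: duplicate {unique_key} {key!r}")
        seen_keys.add(key)
    return violations

rows = [
    {"order_id": 1, "customer": "acme"},
    {"order_id": 1, "customer": ""},  # duplicate id, empty customer
]
issues = check_quality(rows, required_fields=["order_id", "customer"],
                       unique_key="order_id")
print(issues)
```

Surfacing violations as a list rather than raising on the first failure lets a pipeline log every problem in a batch before deciding whether to quarantine it.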

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


At Norconsulting we are currently looking for an Azure Data Engineer to join us in India in a freelance opportunity for a major banking organization.

Duration: long term
Location: Chennai, Hybrid
Rate: 110 USD/day (around 2,400 USD per month)
Type of assignment: Full-time (8h/day, Monday to Friday)
Years of experience: 5

• Master's degree or equivalent experience, with a minimum of 5 years of experience in Data Science roles.
• Relevant certifications in data engineering, cloud computing, or Databricks are a plus.
• Proven track record of designing and implementing data pipelines on the Databricks platform.
• Experience collaborating with data scientists, analysts, and business stakeholders, as well as development, operations, and security teams, to deliver data-driven solutions.
• Ability to work effectively in a team-oriented environment.
• Strong communication skills to articulate technical concepts to non-technical stakeholders.
• Demonstrated ability to identify and resolve technical issues efficiently.
• Innovative mindset with a focus on continuous improvement and automation.
• Ability to adapt to new technologies and methodologies in a fast-paced environment.
• Ability to work in a multicultural environment, with excellent process, functional, communication, teamwork, and interpersonal skills, and willingness to work in a team environment to support other technical staff as needed. A high tolerance for ambiguity is expected.

WBGJP00012338

Posted 1 week ago

Apply

7.5 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum Experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:

- A Databricks resource with Azure cloud experience is needed.
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design scalable data solutions.
- Implement best practices for data governance and security throughout the data lifecycle.

Professional & Technical Skills:

- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: Experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud platforms and services related to data storage and processing.
Additional Information:

- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

Exploring Databricks Jobs in India

Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Databricks professionals in India varies by experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Career Path

In the field of Databricks, a typical career path may progress through:

  1. Junior Developer
  2. Senior Developer
  3. Tech Lead
  4. Architect

Related Skills

In addition to Databricks expertise, the following skills are often expected or helpful:

  • Apache Spark
  • Python/Scala programming
  • Data modeling
  • SQL
  • Data visualization tools

Interview Questions

  • What is Databricks and how is it different from Apache Spark? (basic)
  • Explain the concept of lazy evaluation in Databricks. (medium)
  • How do you optimize performance in Databricks? (advanced)
  • What are the different cluster modes in Databricks? (basic)
  • How do you handle data skewness in Databricks? (medium)
  • Explain how you can schedule jobs in Databricks. (medium)
  • What is the significance of Delta Lake in Databricks? (advanced)
  • How do you handle schema evolution in Databricks? (medium)
  • What are the different file formats supported by Databricks for reading and writing data? (basic)
  • Explain the concept of checkpointing in Databricks. (medium)
  • How do you troubleshoot performance issues in Databricks? (advanced)
  • What are the key components of Databricks Runtime? (basic)
  • How can you secure your data in Databricks? (medium)
  • Explain the role of MLflow in Databricks. (advanced)
  • How do you handle streaming data in Databricks? (medium)
  • What is the difference between Databricks Community Edition and Databricks Workspace? (basic)
  • How do you set up monitoring and alerting in Databricks? (medium)
  • Explain the concept of Delta caching in Databricks. (advanced)
  • How do you handle schema enforcement in Databricks? (medium)
  • What are the common challenges faced in Databricks projects and how do you overcome them? (advanced)
  • How do you perform ETL operations in Databricks? (medium)
  • Explain the concept of MLflow Tracking in Databricks. (advanced)
  • How do you handle data lineage in Databricks? (medium)
  • What are the best practices for data governance in Databricks? (advanced)
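Several of the questions above, lazy evaluation in particular, are easiest to answer with a concrete example. Spark transformations only build an execution plan; nothing runs until an action forces evaluation. Plain-Python generators give a rough analogy that works without a Spark cluster; this is an analogy only, not the Spark API:

```python
# Rough plain-Python analogy for Spark's lazy evaluation. Like Spark
# transformations, a generator expression builds a recipe; work happens
# only when a terminal operation (an "action", here sum) consumes it.

calls = []

def expensive(x):
    calls.append(x)  # record when work actually happens
    return x * 2

data = range(5)
# "Transformation": defines the pipeline, runs nothing yet.
pipeline = (expensive(x) for x in data if x % 2 == 0)

print(len(calls))       # 0 -> no work done at definition time
result = sum(pipeline)  # "action": forces evaluation
print(len(calls))       # 3 -> only the even elements were processed
print(result)           # 12
```

In an interview, the follow-up point is that laziness lets Spark fuse transformations and prune unnecessary work before anything touches the data.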

Closing Remark

As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies