10 - 18 years
35 - 55 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Warm greetings from SP Staffing Services Private Limited! We have an urgent opening with our CMMI Level 5 client for the position below. Please send your updated profile if you are interested.
Relevant Experience: 8 - 18 years
Location: Pan India
Job Description:
- Experience in Synapse with PySpark
- Knowledge of Big Data pipelines and data engineering
- Working knowledge of the MSBI stack on Azure
- Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage
- Hands-on experience with visualization tools such as Power BI
- Implement end-to-end data pipelines using Cosmos DB / Azure Data Factory
- Good analytical thinking and problem solving
- Good communication and coordination skills; able to work as an individual contributor
- Requirement analysis; create, maintain, and enhance Big Data pipelines
- Daily status reporting and interaction with leads
- Version control (ADO/Git) and CI/CD
- Marketing campaign and data platform / product telemetry experience
- Data validation and data quality checks for new streams (a minimal check sketch follows this listing)
- Monitoring of data pipelines created in Azure Data Factory; updating the tech spec and wiki page for each pipeline implementation; updating ADO on a daily basis
If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in
With Regards,
Sankar G
Sr. Executive - IT Recruitment
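For illustration only: the validation and data-quality duties above might reduce to a PySpark step like the following minimal sketch. The table name, key column, and 1% threshold are hypothetical, not taken from the posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-quality-check").getOrCreate()

# Hypothetical staging table landed by an Azure Data Factory pipeline.
df = spark.table("staging.customer_events")

# Basic data-quality checks: row volume and null rate on a key column.
total = df.count()
null_keys = df.filter(F.col("customer_id").isNull()).count()

if total == 0:
    raise ValueError("No rows landed for this run; fail the pipeline step.")
if null_keys / total > 0.01:
    raise ValueError(f"Null-key rate {null_keys / total:.2%} exceeds the 1% threshold.")

print(f"Validated {total} rows; {null_keys} null keys.")

A step like this can be wired into a Data Factory pipeline as a notebook activity, so a raised exception fails the run and surfaces in pipeline monitoring.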
Posted 4 months ago
10 - 20 years
35 - 55 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Warm greetings from SP Staffing Services Private Limited! We have an urgent opening with our CMMI Level 5 client for the position below. Please send your updated profile if you are interested.
Relevant Experience: 8 - 18 years
Location: Pan India
Job Description:
Mandatory Skill: Azure ADB with Azure Data Lake
Lead the architecture design and implementation of advanced analytics solutions using Azure Databricks and Fabric. The ideal candidate will have a deep understanding of big data technologies, data engineering, and cloud computing, with a strong focus on Azure Databricks, along with strong SQL.
Responsibilities:
- Work closely with business stakeholders and other IT teams to understand requirements and deliver effective solutions
- Oversee the end-to-end implementation of data solutions, ensuring alignment with business requirements and best practices
- Lead the development of data pipelines and ETL processes using Azure Databricks, PySpark, and other relevant tools
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Lake, Azure Synapse, Azure Data Factory) and on-premise systems
- Provide technical leadership and mentorship to the data engineering team, fostering a culture of continuous learning and improvement
- Ensure proper documentation of architecture, processes, and data flows, while ensuring compliance with security and governance standards
- Ensure best practices are followed in terms of code quality, data security, and scalability
- Stay updated with the latest developments in Databricks and associated technologies to drive innovation
Essential Skills:
- Strong experience with Azure Databricks, including cluster management, notebook development, and Delta Lake
- Proficiency in big data technologies (e.g., Hadoop, Spark) and data processing frameworks (e.g., PySpark)
- Deep understanding of Azure services such as Azure Data Lake, Azure Synapse, and Azure Data Factory
- Experience with ETL/ELT processes, data warehousing, and building data lakes
- Strong SQL skills and familiarity with NoSQL databases
- Experience with CI/CD pipelines and version control systems like Git
- Knowledge of cloud security best practices
Soft Skills:
- Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders
- Strong problem-solving skills and a proactive approach to identifying and resolving issues
- Leadership skills, with the ability to manage and mentor a team of data engineers
Experience:
- Demonstrated expertise of 8 years in developing data ingestion and transformation pipelines using Databricks/Synapse notebooks and Azure Data Factory
- Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2
- Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation
- Proficiency in building and optimizing query layers using Databricks SQL
- Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions
- Prior experience in developing, optimizing, and deploying Power BI reports
- Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions
If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in
With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 4 months ago
4 - 8 years
10 - 15 Lacs
Bengaluru
Work from Office
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.
Your primary responsibilities include:
- Comprehensive Feature Development and Issue Resolution: working on end-to-end feature development and solving challenges faced in the implementation.
- Stakeholder Collaboration and Issue Resolution: collaborating with key stakeholders, internal and external, to understand problems and issues with the product and its features, and resolving them as per the defined SLAs.
- Continuous Learning and Technology Integration: being eager to learn new technologies and to apply them in feature development.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Proficient in .NET Core with React or Angular
- Experience in Agile teams applying the best architectural, design, and unit-testing patterns and practices, with an eye for code quality and standards
- Azure Function, Azure Service Bus, Azure Storage Account (mandatory); Azure Durable Functions; Azure Data Factory; Azure SQL or Cosmos DB (database) required
- Ability to write calculation rules and configurable consolidation rules
Preferred technical and professional experience:
- Excellent written and verbal interpersonal skills for coordinating across teams
- At least 2 end-to-end implementation experiences
- Ability to write and update the rules of historical overrides
Posted 4 months ago
5 - 10 years
10 - 20 Lacs
Pune, Chennai, Bengaluru
Hybrid
Walk-in drive for Azure Databricks at Chennai (Siruseri campus) on 10th May [Saturday]. Hexaware Technologies is hiring Azure Databricks consultants with 4 to 10 years of experience.
Mandatory Skills: Azure Databricks, PySpark, SQL, Azure Data Factory
Interested candidates, please share your details to manojkumark2@hexaware.com:
Total Exp:
Exp in Databricks & Data Factory:
Exp in PySpark:
CCTC & ECTC:
NP / LWD:
Posted 4 months ago
8 - 12 years
20 - 25 Lacs
Gandhinagar
Remote
Requirement:
- 8+ years of professional experience as a data engineer, including 2+ years as a senior data engineer
- Strong working experience in Python and its data analysis packages (Pandas / NumPy)
- Strong understanding of prevalent cloud ecosystems and experience in one of the cloud platforms: AWS / Azure / GCP
- Strong working experience in one of the leading MPP databases: Snowflake / Amazon Redshift / Azure Synapse / Google BigQuery
- Strong working experience in one of the leading cloud data orchestration tools: Azure Data Factory / AWS Glue / Apache Airflow (see the sketch after this listing)
- Experience with Agile methodologies, Test-Driven Development, and implementing CI/CD pipelines using one of the leading services: GitLab / Azure DevOps / Jenkins / AWS CodePipeline / Google Cloud Build
- Data Governance / Data Management / Data Quality project implementation experience
- Experience in big data processing using Spark
- Strong experience with SQL databases (SQL Server, Oracle, Postgres, etc.)
- Stakeholder management experience and very good communication skills
- Working experience on end-to-end project delivery, including requirement gathering, design, development, testing, deployment, and warranty support
- Working experience with various testing levels, such as unit testing, integration testing, and system testing
- Working experience with large, heterogeneous datasets in building and optimizing data pipelines and pipeline architectures
Nice to have Skills:
- Working experience in Databricks notebooks and managing Databricks clusters
- Experience in a data modelling tool such as Erwin or ER Studio
- Experience in one of the data architectures, such as Data Mesh or Data Fabric
- Has handled real-time or near-real-time data
- Experience in one of the leading reporting and analysis tools, such as Power BI, Qlik, Tableau, or Amazon QuickSight
- Working experience with API integration
- General insurance / banking / finance domain understanding
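The posting accepts Apache Airflow as one of the orchestration tools, so here is a minimal Airflow DAG sketch of a daily extract-then-validate flow. The DAG id, task bodies, and schedule are illustrative assumptions, and the `schedule` argument assumes Airflow 2.4+ (older 2.x releases use `schedule_interval`).

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull a day's worth of data from a source system.
    print("extracting...")

def validate():
    # Placeholder: row-count and null checks before loading downstream.
    print("validating...")

with DAG(
    dag_id="daily_ingest",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)
    extract_task >> validate_task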
Posted 4 months ago
8.0 - 13.0 years
20 - 30 Lacs
Hyderabad
Work from Office
Strong SQL programming (T-SQL, PL/SQL, or similar). Power BI: data modelling, DAX, Power Query (M language), and report building. Experience in ETL tools (SSIS or equivalent). Azure Synapse / Data Factory / Power Platform.
Posted Date not available
7.0 - 12.0 years
11 - 16 Lacs
Bengaluru
Work from Office
Technical Expert BSI
We are looking for a Technical Expert to be part of our Business Solutions Integrations team in the Analytics, Data and Integration stream.
Position Snapshot
Location: Bengaluru
Type of Contract: Permanent
Stream: Analytics, Data and Integration
Type of work: Hybrid
Work Language: Fluent Business English
The role
The Integration Technical Expert will work in the Business Solution Integration (BSI) team, focused on product engineering and operations related to the Data Integration, Digital Integration, and Process Integration products, and on the initiatives where these products are used. You will work together with the Product Manager and Product Owners, as well as various other counterparts, on the evolution of the DI, PI, and Digital products, and with architects on orchestrating the design of integration solutions. You will also act as the first point of contact for project teams to manage demand, and will help drive the transition from engineering to sustain as per BSI standards. You will work with Operations Managers and Sustain teams on the orchestration of operations activities, proposing improvements for better performance of the platforms.
What you'll do
- Work with architects to understand and orchestrate the design choices between the different Data, Process, and Digital Integration patterns for fulfilling data needs.
- Translate the various requirements into deliverables for the development and implementation of Process, Data, and Digital Integration solutions, following up on requests to get the work done.
- Design, develop, and implement integration solutions using ADF, LTRS, Data Integration, SAP PO, CPI, Logic Apps, MuleSoft, and Confluent.
- Work with the Operations Managers and Sustain teams on orchestrating performance and operational issues.
We offer you
We offer more than just a job. We put people first and inspire you to become the best version of yourself. Great benefits, including a competitive salary and a comprehensive social benefits package: we have one of the most competitive pension plans on the market, as well as flexible remuneration with tax advantages (health insurance, restaurant card, mobility plan, etc.). Personal and professional growth through ongoing training and constant career opportunities, reflecting our conviction that people are our most important asset.
Minimum qualifications
- Minimum of 7 years of industry experience in software delivery projects
- Experience in project and product management, agile methodologies, and solution delivery at scale
- Skilled and experienced Technical Integration Expert with experience in various integration platforms and tools, including ADF, LTRS, Data Integration, SAP PO, CPI, Logic Apps, MuleSoft, and Confluent
- Ability to contribute to a high-performing, motivated workgroup by applying interpersonal and collaboration skills to achieve goals
- Fluency in English with excellent oral and written communication skills
- Experience in working with cultural diversity: respect for various cultures and understanding how to work with a variety of cultures in the most effective way
Bonus points if you have:
- Experience with the Azure platform (especially with Data Factory)
- Experience with Azure DevOps and with ServiceNow
- Experience with Power Apps and Power BI
Posted Date not available
12.0 - 17.0 years
18 - 25 Lacs
Bengaluru
Work from Office
Boeing India has an immediate opening for a Lead Java Full Stack Developer to lead and work with software development teams in the area of Maintenance Engineering. This position is expected to provide technical leadership, project leadership, and software development skills to the team. It will also support acquisition of talent and build the team further, develop employees, and provide technical guidance to achieve excellent technical and business outcomes; i.e., provide technical leadership and project management guidance to the team, and lead the implementation of technical approaches, processes, and procedures to deliver sophisticated technical capabilities. This role will be based out of Bangalore, India.
Position Responsibilities:
The ideal candidate will have the following skills and execute the respective responsibilities:
- Understands and develops software solutions to meet end-user requirements.
- Leads his or her team to deliver effective and timely results with first-time quality.
- Ensures that the application integrates with the overall system architecture, utilizing standard IT lifecycle methodologies and tools.
- Leads and contributes to product design and architecture discussions.
- Follows policies and procedures and develops technical strategies; helps acquire resources, provides technical guidance to team members, and leads process improvements.
- Develops and maintains relationships and partnerships with team members and peers to provide oversight and approval of technical approaches, products, and processes.
- Works with multi-functional teams supporting work packages for multiple programs.
- Provides periodic updates on project progress, quality metrics, project summaries, and other related documents.
- Contributes to productivity improvement through use of the Quality Management System and lean principles.
- Supports initiatives of the Boeing India organization related to engineering excellence, employee development, customer engagement, etc.
Employer will not sponsor applicants for employment visa status.
Basic Qualifications (Required Skills/Experience):
- 12+ years of related work experience.
- Full Stack Developer (Java) experienced in analysis, design, development, implementation, and troubleshooting of web-based applications, with expertise in front-end and back-end technologies.
- Mandatory expertise in XML / XML Schema.
- Expertise in Azure (Kubernetes, Docker, AI Search, Data Factory).
- Very strong experience in Java (Spring, JUnit, Hamcrest, MockMvc, Mockito).
- Skilled in Angular and JavaScript (Jasmine, Karma, SASS, TypeScript, Webpack, CSS).
- Must be a self-starter with a positive attitude, high ethics, and a track record of working successfully under pressure in a time-constrained environment.
- Must be able to work collaboratively with multi-functional teams within Boeing and with external partners; develop and maintain relationships / partnerships with customers, collaborators, peers, and partners to develop collaborative plans and complete projects.
- Proactively seek information and direction to successfully complete the statement of work.
- Must be flexible, with a high tolerance for organizational complexity and the ability to work with team members across different cultures and time zones.
- Demonstrate strong written, verbal, and interpersonal communication skills.
- Be fluent in written and spoken English, and have a high degree of proficiency with MS Office tools to prepare comprehensive reports, presentations, proposals, and statements of work.
- Must have experience leading teams, with the ability to mentor and teach juniors and partners to accomplish project and departmental goals and objectives.
Preferred Qualifications (Desired Skills/Experience):
In addition to the basic qualifications above, the ideal candidate would also possess the following:
- Expertise in Oxygen Web Author
- Strong experience with Alfresco CMS
- Experience in Activiti BPM
- Prior experience in the aerospace domain
Candidates with experience under 13 years may be considered if they demonstrate depth and breadth of skills, have relevant certifications, and/or possess advanced degrees.
Typical Education & Experience: Typically, 11 or more years of related work experience or relevant military experience. Advanced degree (e.g., Bachelor's, Master's, etc.) preferred, but not required.
Relocation: This position does offer relocation within India.
Posted Date not available
6.0 - 10.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Job Summary
As a Microsoft Fabric Data Engineer/Developer, you will play a vital role in designing, developing, and implementing robust and scalable data solutions within the Microsoft Fabric ecosystem. You will collaborate closely with data architects, business stakeholders, and cross-functional teams to transform raw data into actionable insights, driving informed decision-making across the organization. If you are passionate about data engineering, possess a strong technical background, and excel in collaborative environments, we invite you to join our growing data team.
Career Level: IC2
Responsibilities
Microsoft Fabric Development:
- Design, develop, and deploy end-to-end data solutions using the various components of Microsoft Fabric, including Lakehouse, Data Warehouse, Data Factory, and Data Engineering.
- Implement and optimize data pipelines for ingestion, transformation, and curation of data from diverse sources (e.g., Azure Data Lake Storage Gen2, on-premises databases, APIs, third-party systems); a minimal ingestion sketch follows this listing.
- Develop and optimize data models within Microsoft Fabric, ensuring adherence to best practices for performance, scalability, and data quality.
- Utilize Power BI for data visualization and reporting, ensuring seamless integration with Fabric data assets.
Azure Data Services Integration:
- Demonstrate strong hands-on experience with core Microsoft Azure data services, including Azure Data Factory (for ETL/ELT orchestration), Azure Databricks (for advanced analytics and processing), and Azure Data Lake Storage Gen2.
- Integrate Microsoft Fabric solutions with existing Azure data services and other enterprise systems.
Data Architecture & Governance:
- Contribute to the design and implementation of robust, scalable, and secure data architectures within the Microsoft Fabric platform.
- Implement data quality, validation, and reconciliation processes to ensure data integrity and accuracy.
- Apply data governance best practices, including security, access controls (e.g., role-based access control), and compliance within Fabric and Azure Purview.
Documentation & Knowledge Sharing:
- Maintain comprehensive documentation for data architectures, pipelines, data models, and processes.
- Stay updated with the latest advancements in Microsoft Fabric, Azure data services, and data engineering best practices.
Qualifications & Skills
Mandatory:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
- 4-7 years of professional experience as a Data Engineer, Data Developer, or in a similar role.
- Hands-on experience with Microsoft Fabric, including its core components (Lakehouse, Data Warehouse, Data Factory, Data Engineering).
- Strong expertise in Microsoft Azure data services: Azure Data Factory (ADF), Azure Data Lake Storage Gen2.
- Proven experience in designing, developing, and maintaining scalable data pipelines.
- Solid understanding of data warehousing concepts, dimensional modeling, and data lakehouse architectures.
- Proficiency in SQL for data manipulation and querying.
- Experience with version control systems (e.g., Git, Azure Repos).
- Strong analytical and problem-solving skills with meticulous attention to detail.
- Excellent communication skills (written and verbal) and the ability to collaborate effectively with cross-functional teams.
Good-to-Have:
- Certification in Microsoft Azure or Microsoft Fabric.
- Experience with other cloud-based data platforms, such as Amazon Web Services (AWS) or Google Cloud Platform (GCP).
- Knowledge of data governance frameworks and best practices.
Additional Notes
Ideal to have some background knowledge around Finance / Investment Banking / Fixed Income / OCIO business.
Self-Assessment Questions
To help you determine if this role is a good fit, please consider the following questions:
1) Can you describe your experience with Microsoft Fabric and its core components, highlighting specific projects or accomplishments?
2) How do you ensure data quality, validation, and reconciliation in your data pipelines, and can you provide an example from a previous project?
3) Can you explain your approach to data governance, including security, access controls, and compliance, and how you've applied this in a previous role?
4) How do you stay up-to-date with the latest advancements in Microsoft Fabric, Azure data services, and data engineering best practices?
5) Can you provide an example of a complex data problem you've solved in the past, highlighting your analytical and problem-solving skills?
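As a rough sketch of the ingestion work described above (ADLS Gen2 into a lakehouse Delta table), assuming a Spark environment such as Fabric or Databricks; the storage account, container, and table names are made up:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adls-to-delta").getOrCreate()

# Hypothetical ADLS Gen2 path: abfss://<container>@<account>.dfs.core.windows.net/<folder>
source_path = "abfss://raw@contosolake.dfs.core.windows.net/sales/2024/"

df = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv(source_path)
)

# Land curated data as a Delta table for downstream SQL / Power BI consumption.
(
    df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("lakehouse.sales_curated")  # hypothetical schema.table
)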
Posted Date not available
7.0 - 12.0 years
19 - 34 Lacs
Bengaluru
Work from Office
Job Summary:
We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.
Key Responsibilities:
- Design and implement ETL/ELT pipelines using Databricks and PySpark.
- Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets (see the sketch after this listing).
- Develop high-performance SQL queries and optimize Spark jobs.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
- Ensure data quality and compliance across all stages of the data lifecycle.
- Implement best practices for data security and lineage within the Databricks ecosystem.
- Participate in CI/CD, version control, and testing practices for data pipelines.
Required Skills:
- Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
- Strong hands-on skills with PySpark and Spark SQL.
- Solid experience writing and optimizing complex SQL queries.
- Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
- Experience with cloud platforms like Azure or AWS.
- Understanding of data governance, RBAC, and data security standards.
Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
- Exposure to streaming data and real-time processing.
- Knowledge of DevOps practices for data engineering.
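A minimal sketch of the Unity Catalog governance tasks named above, as Spark SQL run from a Databricks notebook (where `spark` is predefined); the catalog, schema, table, and group names are placeholders:

# Unity Catalog uses a three-level namespace: catalog.schema.table.
spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.sales")

spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.sales.orders (
        order_id BIGINT,
        amount   DECIMAL(18, 2)
    ) USING DELTA
""")

# Role-based access: grant read access to a hypothetical analysts group.
spark.sql("GRANT SELECT ON TABLE analytics.sales.orders TO `data-analysts`")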
Posted Date not available
5.0 - 10.0 years
20 - 35 Lacs
Pune, Chennai, Bengaluru
Hybrid
Location: PAN India
Experience: 5 to 12 years
Key Responsibilities:
1. Design, implement, and maintain end-to-end ML pipelines for model training, evaluation, and deployment
2. Collaborate with data scientists and software engineers to operationalize ML models
3. Develop and maintain CI/CD pipelines for ML workflows
4. Implement monitoring and logging solutions for ML models
5. Optimize ML infrastructure for performance, scalability, and cost-efficiency
6. Ensure compliance with data privacy and security regulations
Required Skills and Qualifications:
1. Strong programming skills in Python, with experience in ML frameworks
2. Expertise in containerization technologies (Docker) and orchestration platforms (Kubernetes)
3. Proficiency in cloud platforms (AWS) and their ML-specific services
4. Experience with MLOps tools (one possible example is sketched after this listing)
5. Strong understanding of DevOps practices and tools (GitLab, Artifactory, Gitflow, etc.)
6. Knowledge of data versioning and model versioning techniques
7. Experience with monitoring and observability tools (Prometheus, Grafana, ELK stack)
8. Knowledge of distributed training techniques
9. Experience with ML model serving frameworks (TensorFlow Serving, TorchServe)
10. Understanding of ML-specific testing and validation techniques
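The posting names "MLOps tools" without committing to one; MLflow is a common choice, so here is a minimal, hedged experiment-tracking sketch. The dataset, model, and run name are illustrative, not from the posting.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data standing in for a real training set.
X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)

    # Log parameters, metrics, and a versioned model artifact for serving.
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")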
Posted Date not available
7.0 - 12.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Azure: Databricks (mandatory), Data Factory, Data Lake, ADB, ADF, ADLS, Spark, Synapse, streaming, Synapse SQL pools, SQL coding, Synapse pipelines. Hands-on experience in SQL and DB-based validations (a small reconciliation sketch follows this listing). Hands-on experience in the ETL process. Coding experience in Scala / PySpark / Python.
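For the "SQL and DB-based validations" line, a small reconciliation sketch in Spark SQL, assuming a notebook session where `spark` is predefined; the staging and warehouse table names are hypothetical:

# Reconcile row counts between a staging table and its warehouse target.
staging_count = spark.sql("SELECT COUNT(*) AS c FROM staging.orders").first()["c"]
target_count = spark.sql("SELECT COUNT(*) AS c FROM dw.orders").first()["c"]

assert staging_count == target_count, (
    f"Count mismatch: staging={staging_count}, target={target_count}"
)

# Key-level check: orders present in staging but missing from the target.
missing = spark.sql("""
    SELECT s.order_id
    FROM staging.orders s
    LEFT ANTI JOIN dw.orders t ON s.order_id = t.order_id
""")
assert missing.count() == 0, "Orders present in staging are missing from target."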
Posted Date not available
12.0 - 15.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: SAP Master Data Management & Architecture
Good to have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application requirements are met, overseeing the development process, and providing guidance to team members. You will also engage in problem-solving activities, ensuring that solutions are effectively implemented across multiple teams, while maintaining a focus on quality and efficiency in application delivery.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor project progress and ensure alignment with strategic goals.
Professional & Technical Skills:
- Must Have Skills: Proficiency in SAP Master Data Management & Architecture.
- Strong understanding of data governance principles and practices.
- Experience with data integration techniques and tools.
- Ability to design and implement data models that support business processes.
- Familiarity with data quality management and data lifecycle management.
Additional Information:
- The candidate should have a minimum of 12 years of experience in SAP Master Data Management & Architecture.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted Date not available
4.0 - 8.0 years
10 - 15 Lacs
Bengaluru
Work from Office
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.
Your primary responsibilities include:
- Comprehensive Feature Development and Issue Resolution: working on end-to-end feature development and solving challenges faced in the implementation.
- Stakeholder Collaboration and Issue Resolution: collaborating with key stakeholders, internal and external, to understand problems and issues with the product and its features, and resolving them as per the defined SLAs.
- Continuous Learning and Technology Integration: being eager to learn new technologies and to apply them in feature development.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Proficient in .NET Core with React or Angular
- Experience in Agile teams applying the best architectural, design, and unit-testing patterns and practices, with an eye for code quality and standards
- Azure Function, Azure Service Bus, Azure Storage Account (mandatory); Azure Durable Functions; Azure Data Factory; Azure SQL or Cosmos DB (database) required
- Ability to write calculation rules and configurable consolidation rules
Preferred technical and professional experience:
- Excellent written and verbal interpersonal skills for coordinating across teams
- At least 2 end-to-end implementation experiences
- Ability to write and update the rules of historical overrides
Posted Date not available
5.0 - 10.0 years
22 - 27 Lacs
Kochi
Work from Office
Create solution outlines and macro designs to describe end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles.
- Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation.
- Contribute to reusable component / asset / accelerator development to support capability development.
- Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies.
- Participate in customer PoCs to deliver the outcomes.
- Participate in delivery reviews / product reviews and quality assurance, and work as a design authority.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems
- Experience in data engineering and architecting data platforms
- Experience in architecting and implementing data platforms on the Azure Cloud Platform; Azure experience is mandatory (ADLS Gen1 / Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow
- Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks
Preferred technical and professional experience:
- Experience in architecting complex data platforms on the Azure Cloud Platform and on-prem
- Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions, such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric
- Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.
Posted Date not available
7.0 - 10.0 years
20 - 30 Lacs
Noida
Hybrid
Description
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
We are seeking a talented and motivated Data Engineer to join our growing data team. You will play a key role in building scalable data pipelines, optimizing data infrastructure, and enabling data-driven solutions.
Primary Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines for batch and real-time data processing
- Build and optimize data models and data warehouses to support analytics and reporting
- Collaborate with analysts and software engineers to deliver high-quality data solutions
- Ensure data quality, integrity, and security across all systems
- Monitor and troubleshoot data pipelines and infrastructure for performance and reliability
- Contribute to internal tools and frameworks to improve data engineering workflows
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Qualifications
Required Qualifications:
- 5+ years of experience working on commercially available software and/or healthcare platforms as a Data Engineer
- 3+ years of solid experience designing and building enterprise data solutions on cloud
- 1+ years of experience developing solutions hosted within public cloud providers such as Azure or AWS, or private cloud/container-based systems using Kubernetes/OpenShift
- Experience with some of the modern relational databases
- Experience with data warehousing services, preferably Snowflake
- Experience in using modern software engineering and product development tools, including Agile/SAFe, Continuous Integration, Continuous Delivery, DevOps, etc.
- Solid experience operating in a quickly changing environment and driving technological innovation to meet business requirements
- Skilled at optimizing SQL statements
- Subject matter expert on cloud technologies, preferably Azure, and the Big Data ecosystem
Preferred Qualifications:
- Experience with real-time data streaming and event-driven architectures
- Experience building Big Data solutions on public cloud (Azure)
- Experience building data pipelines on Azure with skills in Databricks Spark, Scala, Azure Data Factory, Kafka and Kafka Streams, App Services, and Azure Functions
- Experience developing RESTful services in .NET, Java, or any other language
- Experience with DevOps in data engineering
- Experience with microservices architecture
- Exposure to DevOps practices and infrastructure-as-code (e.g., Terraform, Docker)
- Knowledge of data governance and data lineage tools
- Ability to establish repeatable processes and best practices, and to implement version control software in a cloud team environment
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location, and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted Date not available
6.0 - 11.0 years
6 - 16 Lacs
Bengaluru
Hybrid
Data Engineer - Azure + Databricks
Location: Bangalore | 3 days WFO
Experience: 6+ years
Requirements:
- 5+ years of experience in data engineering roles.
- Hands-on experience with:
  - Azure Data Factory (ADF) – building pipelines, triggers, linked services.
  - Azure Databricks – building and managing Spark jobs in PySpark.
  - Azure Synapse Analytics – data warehousing, SQL queries, workspace orchestration.
  - Apache Kafka – consuming and processing real-time data streams (a minimal streaming sketch follows this listing).
- Strong in SQL, Python, and Spark for data manipulation and transformation.
- Exposure to CI/CD practices (Azure DevOps, Git workflows, build/release pipelines).
- Understanding of data lake architecture and modern data warehousing principles.
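For the Kafka requirement, a hedged PySpark Structured Streaming sketch that consumes a topic into Delta; the broker address, topic, and paths are placeholders, and `spark` is assumed predefined (e.g., in a Databricks notebook):

from pyspark.sql import functions as F

# Consume a hypothetical topic as a stream.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before persisting.
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    F.col("timestamp"),
)

# Append raw events to a Delta path, with checkpointing for fault tolerance.
query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")  # placeholder path
    .outputMode("append")
    .start("/tmp/delta/orders")                               # placeholder path
)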
Posted Date not available
11.0 - 21.0 years
35 - 80 Lacs
Hyderabad
Hybrid
Job Title: Principal Data Engineer, Logistics
Employment Type: Full Time
Experience: 12+ Years
About the Role:
We are looking for a Principal Data Engineer to lead the design and delivery of scalable data solutions using Azure Data Factory and Azure Databricks. This is a consulting-focused role that requires strong technical expertise, stakeholder engagement, and architectural thinking. You will work closely with business, functional, and technical teams to define data strategies, design robust pipelines, and ensure smooth delivery in an Agile environment.
Responsibilities:
- Collaborate with business and technology stakeholders to gather and understand data needs
- Translate functional requirements into scalable and maintainable data architecture
- Design and implement robust data pipelines (an incremental-upsert sketch follows this listing)
- Lead data modeling, transformation, and performance optimization efforts
- Ensure data quality, validation, and consistency
- Participate in Agile ceremonies, including sprint planning and backlog grooming
- Support CI/CD automation for data pipelines and integration workflows
- Mentor junior engineers and promote best practices in data engineering
Must Have:
- 12+ years of IT experience, with at least 5 years in data architecture roles in modern metadata-driven and cloud-based technologies, bringing a software engineering mindset
- Strong analytical and problem-solving skills; ability to determine data patterns and perform root cause analysis to resolve production issues
- Excellent communication skills, with experience in leading client-facing discussions
- Strong hands-on experience with Azure Data Factory and Databricks, leveraging custom solutioning and design beyond drag-and-drop capabilities for big data workloads
- Demonstrated proficiency in SQL, Python, and Spark
- Experience with CI/CD pipelines, version control, and DevOps tools
- Experience applying dimensional and Data Vault methodologies
- Background in working with Agile methodologies and sprint-based delivery
- Ability to produce clear and comprehensive technical documentation
Nice to Have:
- Experience with Azure Synapse and Power BI
- Experience with Microsoft Purview and/or Unity Catalog
- Understanding of Data Lakehouse and Data Mesh concepts
- Familiarity with enterprise data governance and quality frameworks
- Manufacturing experience within the operations domain
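One common shape for the "robust data pipelines" responsibility is an incremental upsert with Delta Lake's MERGE, sketched below under the assumption of a Databricks notebook (`spark` predefined) and hypothetical staging/target tables:

# Apply the latest batch of changes to the target table as an upsert.
spark.sql("""
    MERGE INTO dw.customers AS t
    USING staging.customers_batch AS s
    ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")

# Optional on Databricks: compact small files after frequent incremental loads.
spark.sql("OPTIMIZE dw.customers")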
Posted Date not available
5.0 - 8.0 years
9 - 12 Lacs
Pune
Hybrid
Job Overview:
We are seeking a highly skilled and motivated Cloud Developer with strong expertise in Terraform and Python, and a solid understanding of the Azure and AWS cloud platforms. The ideal candidate will possess hands-on experience in Azure infrastructure automation, a deep understanding of multi-cloud environments, and a strong background in infrastructure as code (IaC) and DevOps best practices.
Key Responsibilities:
• Design, implement, and manage cloud infrastructure using Terraform on Azure (and AWS where required).
• Automate and orchestrate Azure infrastructure components with a focus on scalability, security, and cost optimization.
• Leverage Azure Data Services such as Data Factory, Synapse, and Databricks for cloud data platform tasks.
• Optimize and manage database workloads with SQL/PLSQL and query optimization techniques.
• Implement and maintain CI/CD pipelines using tools such as Azure DevOps and GitHub Actions.
• Manage and support multi-cloud environments, ensuring seamless operations and integration.
• Troubleshoot infrastructure and application issues across cloud platforms with effective scripting and automation.
• Drive adoption of IaC practices and contribute to continuous improvement of DevOps workflows.
Required Skills & Qualifications:
• Strong proficiency in Terraform and Python.
• Proven experience with Azure infrastructure automation.
• Solid knowledge of the Azure and AWS cloud platforms.
• Hands-on experience with Azure Data Services (Data Factory, Synapse, Databricks) preferred.
• Proficiency in SQL/PLSQL, query optimization, and cloud database environments.
• Familiarity with CI/CD tools, preferably Azure DevOps and GitHub Actions.
• Experience managing multi-cloud environments.
• Strong scripting and troubleshooting skills.
• 5 to 7 years of experience, with 4+ years of relevant experience.
• Demonstrated experience with Infrastructure as Code (IaC) and DevOps best practices.
Preferred Qualifications:
• Azure or AWS certifications (e.g., AZ-104, AZ-400, AWS Certified DevOps Engineer).
• Experience working in Agile/Scrum environments.
• Exposure to monitoring and logging tools.
Posted Date not available
5.0 - 10.0 years
2 - 5 Lacs
Hyderabad
Work from Office
About The Role
Project Role: Quality Engineer (Tester)
Project Role Description: Enables full stack solutions through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Performs continuous testing for security, API, and the regression suite. Creates automation strategy and automated scripts, and supports data and environment configuration. Participates in code reviews, monitors, and reports defects to support continuous improvement activities for the end-to-end testing process.
Must have skills: Automated Testing
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: We are looking for a skilled QA Engineer with strong experience in automation testing using Pytest, ETL validation, and SQL for our modern data warehouse platform built on Azure and Databricks. The candidate will be responsible for ensuring data accuracy, quality, and stability across data pipelines and reports through automated and manual testing techniques.
Key Responsibilities:
- Design and implement test automation frameworks using Pytest for validating ETL pipelines and data quality (a minimal test sketch follows this listing).
- Perform ETL testing, including source-to-target validation, transformation logic checks, and full-load/incremental-load testing.
- Write and optimize complex SQL queries for data validation, reconciliation, and defect identification.
- Perform regression testing to ensure pipeline and platform stability with evolving changes.
- Conduct data reconciliation between source systems and the target data warehouse to ensure completeness and accuracy.
- Work closely with data engineers, product owners, and business users to understand requirements and translate them into test cases.
- Track and document defects, test results, and quality metrics in a structured and timely manner.
- Collaborate in an Agile/Scrum environment and participate in daily stand-ups, sprint planning, and retrospectives.
Must-Have Skills:
- 5+ years of experience in Quality Assurance / Software Testing roles
- Mandatory hands-on experience with Pytest or similar Python-based test automation frameworks
- Strong expertise in ETL testing and Data Warehouse testing
- Proficient in writing and debugging complex SQL queries
- Familiarity with Azure Data Services and Databricks
- Strong knowledge of data validation, reconciliation techniques, and regression testing
- Familiarity with CI/CD tools and test automation integration
Good-to-Have Skills:
- Experience with tools like Data Factory or Synapse Analytics
- Exposure to data governance, lineage tools, or data cataloging
- Understanding of big data concepts and Delta Lake
- Basic understanding of scripting in Python
Additional Information:
- The candidate should have a minimum of 5 years of experience in Automated Testing.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
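A minimal Pytest sketch of the source-to-target validation described above; the `etl_tests.db.get_connection` helper and the table names are hypothetical stand-ins for whatever DB-API connection and schema a real project would use:

import pytest

from etl_tests.db import get_connection  # hypothetical project helper

@pytest.fixture(scope="module")
def conn():
    connection = get_connection()
    yield connection
    connection.close()

def scalar(conn, sql):
    # Run a query and return the single value it produces.
    cur = conn.cursor()
    cur.execute(sql)
    return cur.fetchone()[0]

def test_row_counts_match(conn):
    src = scalar(conn, "SELECT COUNT(*) FROM staging.orders")
    tgt = scalar(conn, "SELECT COUNT(*) FROM dw.fact_orders")
    assert src == tgt, f"source={src}, target={tgt}"

def test_no_null_business_keys(conn):
    nulls = scalar(conn, "SELECT COUNT(*) FROM dw.fact_orders WHERE order_id IS NULL")
    assert nulls == 0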
Posted Date not available
5.0 - 10.0 years
2 - 5 Lacs
Hyderabad
Work from Office
About The Role
Project Role: Quality Engineer (Tester)
Project Role Description: Enables full stack solutions through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Performs continuous testing for security, API, and the regression suite. Creates automation strategy and automated scripts, and supports data and environment configuration. Participates in code reviews, monitors, and reports defects to support continuous improvement activities for the end-to-end testing process.
Must have skills: Automated Testing
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: We are looking for a skilled QA Engineer with strong experience in automation testing using Pytest, ETL validation, and SQL for our modern data warehouse platform built on Azure and Databricks. The candidate will be responsible for ensuring data accuracy, quality, and stability across data pipelines and reports through automated and manual testing techniques.
Key Responsibilities:
- Design and implement test automation frameworks using Pytest for validating ETL pipelines and data quality.
- Perform ETL testing, including source-to-target validation, transformation logic checks, and full-load/incremental-load testing.
- Write and optimize complex SQL queries for data validation, reconciliation, and defect identification.
- Perform regression testing to ensure pipeline and platform stability with evolving changes.
- Conduct data reconciliation between source systems and the target data warehouse to ensure completeness and accuracy.
- Work closely with data engineers, product owners, and business users to understand requirements and translate them into test cases.
- Track and document defects, test results, and quality metrics in a structured and timely manner.
- Collaborate in an Agile/Scrum environment and participate in daily stand-ups, sprint planning, and retrospectives.
Must-Have Skills:
- 5+ years of experience in Quality Assurance / Software Testing roles
- Mandatory hands-on experience with Pytest or similar Python-based test automation frameworks
- Strong expertise in ETL testing and Data Warehouse testing
- Proficient in writing and debugging complex SQL queries
- Familiarity with Azure Data Services and Databricks
- Strong knowledge of data validation, reconciliation techniques, and regression testing
- Familiarity with CI/CD tools and test automation integration
Good-to-Have Skills:
- Experience with tools like DBT, Data Factory, or Synapse Analytics
- Exposure to data governance, lineage tools, or data cataloging
- Understanding of big data concepts and Delta Lake
- Basic understanding of scripting in Python
Additional Information:
- The candidate should have a minimum of 5 years of experience in Automated Testing.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
Posted Date not available
5.0 - 8.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Roles and Responsibilities:
- 4+ years of experience as a data developer using Python
- Knowledge of Spark and PySpark preferable, but not mandatory
- Azure cloud experience preferred (alternate cloud experience is fine); preferred experience in the Azure platform, including Azure Data Lake, Databricks, and Data Factory
- Working knowledge of different file formats such as JSON, Parquet, and CSV (a short format-and-masking sketch follows this listing)
- Familiarity with data encryption and data masking
- Database experience in SQL Server preferable; preferred experience in NoSQL databases like MongoDB
- Team player; reliable, self-motivated, and self-disciplined
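A short PySpark sketch of the file-format and masking items above: read JSON, mask a sensitive column with a one-way hash, and write Parquet. Paths and column names are illustrative only:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("format-and-masking-demo").getOrCreate()

df = spark.read.json("/data/in/customers.json")  # hypothetical input path

# Mask PII with SHA-256 before sharing downstream; drop the raw column.
masked = df.withColumn("email_sha256", F.sha2(F.col("email"), 256)).drop("email")

masked.write.mode("overwrite").parquet("/data/out/customers_masked/")  # hypothetical output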
Posted Date not available
10.0 - 12.0 years
6 - 9 Lacs
Chennai, Bengaluru
Work from Office
Location: Bangalore, Chennai
- Extract data from source systems using Data Factory pipelines
- Massage and cleanse the data
- Transform data based on business rules
- Expose the data for reporting needs and exchange data with downstream applications
- Standardize the various integration flows (e.g., decom ALDML Init integration, simplify ALDML Delta integration)
Posted Date not available
6.0 - 11.0 years
15 - 30 Lacs
Kochi
Hybrid
Immediate to 30 days notice period candidates preferred.
Key Responsibilities:
- Design and implement general architecture for complex data systems.
- Translate business requirements into functional and technical specifications.
- Design and implement lakehouse architecture.
- Develop and manage cloud-based data architecture and reporting solutions.
- Apply data modelling principles for relational and dimensional data structures.
- Design data warehouses following established principles (e.g., Kimball, Inmon).
- Create and manage source-to-target mappings for ETL/ELT processes.
- Mentor junior engineers and contribute to architectural decisions and code reviews.
Minimum Qualifications:
- Bachelor's Degree in Computer Science, Computer Engineering, MIS, or a related field.
- 5+ years of experience with Microsoft SQL Server and strong proficiency in T-SQL and SQL performance tuning (indexing, structure, query optimization).
- 5+ years of experience in Microsoft data platform development and implementation.
- 5+ years of experience with Power BI or other competitive technologies.
- 3+ years of experience in consulting, with a focus on analytics and data solutions.
- 2+ years of experience with Databricks, including Unity Catalog, Databricks SQL, Workflows, and Delta Sharing.
- Proficiency in Python and Apache Spark; develop and manage Databricks notebooks for data transformation, exploration, and model deployment.
- Expertise in Microsoft Azure services, including Azure SQL, Azure Data Factory (ADF), Azure Synapse Analytics, Azure Data Lake, and Stream Analytics.
Preferred Qualifications:
- Experience with Microsoft Fabric.
- Familiarity with CI/CD pipelines and infrastructure-as-code tools like Terraform or Azure Resource Manager (ARM).
- Knowledge of taxonomies, metadata management, and master data management.
- Familiarity with data stewardship, ownership, and data quality management.
- Expertise in Big Data technologies and tools:
  - Big Data platforms: HDFS, MapReduce, Pig, Hive.
  - General DBMS experience with Oracle, DB2, MySQL, etc.
  - NoSQL databases such as HBase, Cassandra, DataStax, MongoDB, CouchDB, etc.
  - Experience with non-Microsoft reporting and BI tools, such as Qlik, Cognos, MicroStrategy, Tableau, etc.
Posted Date not available