
1432 ADF Jobs - Page 43

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

12.0 - 15.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

ECI is the leading global provider of managed services, cybersecurity, and business transformation for mid-market financial services organizations across the globe. From its unmatched range of services, ECI provides stability, security and improved business performance, freeing clients from technology concerns and enabling them to focus on running their businesses. More than 1,000 customers worldwide with over $3 trillion of assets under management put their trust in ECI. At ECI, we believe success is driven by passion and purpose. Our passion for technology is only surpassed by our commitment to empowering our employees around the world.

The Opportunity: ECI has an exciting opportunity for an experienced Data Architect who will work with our clients in building robust data-centric applications. Client satisfaction is our primary objective; all available positions are customer facing, requiring excellent communication and people skills. A positive attitude, rigorous work habits and professionalism in the workplace are a must. Fluency in English, both written and verbal, is required. This is an onsite role with work timings of 1 PM IST – 10 PM IST / 2 PM IST – 11 PM IST.

What you will do:
- Design and develop data architecture for large enterprise applications
- Build and demonstrate quick POCs
- Review the customer environment for master data processes and help with the overall data solution and governance model
- Work closely with business and IT stakeholders to understand master data requirements and current constraints
- Technically mentor junior resources
- Set industry standards through your own work

Who you are:
- 12 to 15 years of experience as a Data Architect
- Hands-on experience in full life cycle Master Data Management
- Hands-on experience in ADF, Azure Purview, Databricks, Azure Fabric Services
- Led data architecture roadmaps, defined business cases and implementations for clients
- Experience in leading, evaluating and designing data architecture based on the overall enterprise data strategy/architecture
- Reviewed customer environments for master data processes and helped with the overall data governance model
- Hands-on experience in building cloud-based enterprise data warehouses
- Experience implementing best practices for data governance, data modeling, and data migrations
- A good team player

Bonus points if you have:
- Deep knowledge of Master Data Management (MDM) principles, processes, architectures, protocols, patterns, and technologies
- Strong knowledge of ETL and data modeling

ECI’s culture is all about connection - connection with our clients, our technology and most importantly with each other. In addition to working with an amazing team around the world, ECI also offers a competitive compensation package and so much more! If you believe you would be a great fit and are ready for your best job ever, we would like to hear from you! Love Your Job, Share Your Technology Passion, Create Your Future Here!

Posted 1 month ago

Apply

0 years

0 Lacs

Mysore, Karnataka, India

On-site

Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio.

Your Role and Responsibilities
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.

Your primary responsibilities include:
- Proficient software development with Microsoft technologies: Demonstrate expertise in software development using Microsoft technologies, ensuring high-quality code and efficient application performance.
- Collaborative problem-solving and stakeholder engagement: Collaborate effectively with stakeholders to understand product requirements and challenges, proactively addressing issues through analytical problem-solving and strategic software solutions.
- Agile learning and technology integration: Stay updated with the latest Microsoft technologies, eagerly embracing continuous learning and integrating newfound knowledge to enhance software development processes and product features.

Preferred Education: Master's Degree

Required Technical and Professional Expertise: SQL, ADF, Azure Databricks

Preferred Technical and Professional Experience: PostgreSQL, MSSQL, Eureka, Hystrix, Zuul/API gateway, in-memory storage

Posted 1 month ago

Apply

4.0 years

0 Lacs

Rajkot, Gujarat, India

On-site

Job Description
Analyze, design, develop, troubleshoot and debug software programs for commercial or end-user applications. Write code, complete programming, and perform testing and debugging of applications.

As a member of the software engineering division, you will perform high-level design based on provided external specifications; specify, design and implement minor changes to existing software architecture; build highly complex enhancements and resolve complex bugs; build and execute unit tests and unit plans; review integration and regression test plans created by QA; and communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products. Duties and tasks are varied and complex, needing independent judgment. You will be fully competent in your own area of expertise and may have a project lead role and/or supervise lower-level personnel. BS or MS degree or equivalent experience relevant to the functional area and 4 years of software engineering or related experience. Career Level - IC3

Responsibilities
The Fusion development team works on design, development and maintenance of the Fusion Global HR, Talent, Configuration Workbench and Compensation product areas.

Qualifications:
- Bachelor's or Master's degree (B.E./B.Tech./MCA/M.Tech./M.S.) from reputed universities
- 1-8 years of experience in applications or product development

Mandatory Skills:
- Strong knowledge of object-oriented programming concepts
- Product design and development experience in [Java / J2EE technologies (JSP/Servlet)] OR [database fundamentals, SQL, PL/SQL]

Optional Skills:
- Development experience on the Fusion Middleware platform
- Familiarity with ADF and exposure to development in the cloud
- Development experience in Oracle Applications / HCM functionality

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Manager

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job title: Senior Manager

About the Role
As a Senior Manager, you'll be taking the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization.

Responsibilities:
- Architecting and designing complex data systems and pipelines
- Leading and mentoring junior data engineers and team members
- Collaborating with cross-functional teams to define data requirements
- Implementing advanced data quality checks and ensuring data integrity
- Optimizing data processes for efficiency and scalability
- Overseeing data security and compliance measures
- Evaluating and recommending new technologies to enhance data infrastructure
- Providing technical expertise and guidance for critical data projects

Required Skills & Experience:
- Proficiency in designing and building complex data pipelines and data processing systems
- Leadership and mentorship capabilities to guide junior data engineers and foster skill development
- Strong expertise in data modeling and database design for optimal performance
- Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness
- Knowledge of data governance principles, ensuring data quality, security, and compliance
- Familiarity with big data technologies like Hadoop, Spark, or NoSQL
- Expertise in implementing robust data security measures and access controls
- Effective communication and collaboration skills for cross-functional teamwork and defining data requirements

Mandatory and Preferred Skill Sets: Cloud: Azure/GCP/AWS; DE technologies: ADF, BigQuery, AWS Glue, etc.; Data lake: Snowflake, Databricks, etc.
Years of Experience Required: 10-13 years
Education Qualification: BE/BTech, ME/MTech, MBA, MCA
Degrees/Field of Study Required: Master of Engineering, Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study Preferred: Not specified
Certifications: Not specified
Required Skills: AWS Glue, Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation {+ 28 more}
Desired Languages: Not specified
Travel Requirements: Not specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No

Posted 1 month ago

Apply

6.0 - 11.0 years

20 - 25 Lacs

Hyderabad

Hybrid

Key Responsibilities:
- Collaborate closely with stakeholders and cross-functional teams to understand business requirements and translate them into technical specifications for data warehouse development.
- Design, develop, and maintain scalable, sustainable ETL and SQL Server data warehouse solutions for healthcare payor data management needs.
- Develop and optimize ETL processes using strong SQL skills and ADF pipelines. Port existing SSIS integrations to ADF.
- Ensure data quality and integrity through the implementation of data governance techniques addressing controls, monitoring, alerting, validation checks, and error handling procedures (see the sketch below).
- Participate in triaging production issues as well as rotational production support.
- Lead developer teams when needed in cross-functional projects. Conduct daily scrums, code reviews and production turnover verifications when needed, including bridging the gap between offshore developers and onshore leadership.
- Mentor and guide offshore team members, fostering a culture of collaboration and continuous learning.

Qualifications:
- Bachelor's degree in computer science, information systems, or a related field.
- 6+ years' experience designing ETL solutions.
- Minimum of 5 years of professional experience in healthcare-related areas with functional knowledge of healthcare business capabilities and data for the Enrollments, Members, Authorizations, Claims, and Provider functional areas.
- Prior experience with Azure Data Factory (ADF) development, SSIS development and porting ETL implementations from SSIS to ADF.
- Strong proficiency in SQL (SQL Server, T-SQL) with demonstrated experience in writing complex queries and optimizing database performance.
- Excellent communication and collaboration skills to work effectively with business stakeholders and cross-functional teams.
- Experience facing business teams in eliciting, developing and refining business requirements.
- Prior experience in data modeling and architecture/design, including leading design and code reviews for ETL teams.
- Experienced in Azure DevOps, CI/CD and release management practices.
- Awareness of best practices in data management, data governance and ensuring defect-free production deployments.

Job Benefits:
- Salary: Competitive and among the best in the industry
- Health insurance: Comprehensive coverage for you and your family
- Flexible timings: Work-life balance with adaptable schedules
- Team lunches & outings: Regular team bonding activities and celebrations
- Growth opportunities: A supportive environment for learning and career advancement
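As a rough illustration of the validation-check and error-handling step this listing describes, here is a minimal sketch; the table, column names, and thresholds are hypothetical and not taken from the posting.

```python
# Minimal sketch of a post-load validation check of the kind described above.
# The batch shape, key columns, and rules are hypothetical examples.
import pandas as pd

def validate_claims_extract(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality errors found in an extracted claims batch."""
    errors = []
    if df.empty:
        errors.append("Extract returned zero rows")
    if df["claim_id"].isna().any():
        errors.append("Null claim_id values found")
    if df["claim_id"].duplicated().any():
        errors.append("Duplicate claim_id values found")
    if (df["paid_amount"] < 0).any():
        errors.append("Negative paid_amount values found")
    return errors

if __name__ == "__main__":
    batch = pd.DataFrame(
        {"claim_id": [101, 102, 102], "paid_amount": [250.0, -10.0, 75.5]}
    )
    problems = validate_claims_extract(batch)
    if problems:
        # In a real pipeline this would raise, alert, or route rows to an error table.
        print("Validation failed:", "; ".join(problems))
    else:
        print("Validation passed")
```

In an ADF-based workflow, a check like this would typically run as a validation activity or notebook step, with failures routed to alerting rather than printed.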

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We’re hiring a Senior ML Engineer (MLOps) — 3-5 yrs
Location: Chennai

What you’ll do
- Tame data → pull, clean, and shape structured & unstructured data.
- Orchestrate pipelines → Airflow / Step Functions / ADF… your call (see the DAG sketch below).
- Ship models → build, tune, and push to prod on SageMaker, Azure ML, or Vertex AI.
- Scale → Spark / Databricks for the heavy lifting.
- Automate everything → Docker, Kubernetes, CI/CD, MLflow, Seldon, Kubeflow.
- Pair up → work with engineers, architects, and business folks to solve real problems, fast.

What you bring
- 3+ yrs hands-on MLOps (4-5 yrs total software experience).
- Proven chops on one hyperscaler (AWS, Azure, or GCP).
- Confidence with Databricks / Spark, Python, SQL, TensorFlow / PyTorch / Scikit-learn.
- You debug Kubernetes in your sleep and treat Dockerfiles like breathing.
- You prototype with open-source first, choose the right tool, then make it scale.
- Sharp mind, low ego, bias for action.

Nice-to-haves
- SageMaker, Azure ML, or Vertex AI in production.
- Love for clean code, clear docs, and crisp PRs.

Why Datadivr?
- Domain focus: we live and breathe F&B — your work ships to plants, not just slides.
- Small team, big autonomy: no endless layers; you own what you build.

📬 How to apply
Shoot your CV + a short note on a project you shipped to careers@datadivr.com or DM me here. We reply to every serious applicant. Know someone perfect? Please share — good people know good people.
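To give a feel for the orchestration work mentioned above, here is a minimal Airflow DAG sketch (Airflow 2.x style); the DAG id, schedule, and task bodies are hypothetical placeholders, not part of the posting.

```python
# Minimal Airflow DAG sketch for a train-and-deploy workflow.
# Task names and the body of each step are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_features(**_):
    print("pull and clean structured/unstructured data")


def train_model(**_):
    print("fit and tune the model, log metrics to MLflow")


def deploy_model(**_):
    print("push the approved model to the serving endpoint")


with DAG(
    dag_id="ml_train_and_deploy",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)

    extract >> train >> deploy
```

The same three-step shape maps onto Step Functions state machines or ADF pipelines if those are the chosen orchestrators.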

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Thiruvananthapuram

On-site

3 - 5 Years | 1 Opening | Trivandrum

Role description
Role Proficiency: Independently develops error-free code with high-quality validation of applications, guides other developers and assists Lead 1 – Software Engineering.

Outcomes:
Understand and provide input to the application/feature/component designs, developing the same in accordance with user stories/requirements. Code, debug, test, document and communicate product/component/features at development stages. Select appropriate technical options for development such as reusing, improving or reconfiguring existing components. Optimise efficiency, cost and quality by identifying opportunities for automation/process improvements and agile delivery models. Mentor Developer 1 – Software Engineering and Developer 2 – Software Engineering to effectively perform in their roles. Identify problem patterns and improve the technical design of the application/system. Proactively identify issues/defects/flaws in module/requirement implementation. Assist Lead 1 – Software Engineering on technical design. Review activities and begin demonstrating Lead 1 capabilities in making technical decisions.

Measures of Outcomes:
Adherence to engineering process and standards (coding standards); adherence to schedule/timelines; adherence to SLAs where applicable; number of defects post delivery; number of non-compliance issues; reduction of reoccurrence of known defects; quick turnaround of production bugs; meeting the defined productivity standards for the project; number of reusable components created; completion of applicable technical/domain certifications; completion of all mandatory training requirements.

Outputs Expected:
Code: Develop code independently for the above. Configure: Implement and monitor the configuration process. Test: Create and review unit test cases, scenarios and execution. Domain relevance: Develop features and components with a good understanding of the business problem being addressed for the client. Manage Project: Manage module-level activities. Manage Defects: Perform defect RCA and mitigation. Estimate: Estimate time, effort and resource dependence for one's own work and others' work, including modules. Document: Create documentation for own work as well as perform peer review of documentation of others' work. Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities. Status Reporting: Report status of tasks assigned and comply with project-related reporting standards/processes. Release: Execute the release process. Design: LLD for multiple components. Mentoring: Mentor juniors on the team; set FAST goals and provide feedback to FAST goals of mentees.

Skill Examples:
Explain and communicate the design/development to the customer. Perform and evaluate test results against product specifications. Develop user interfaces, business software components and embedded software components. Manage and guarantee high levels of cohesion and quality. Use data models. Estimate effort and resources required for developing/debugging features/components. Perform and evaluate tests in the customer or target environment. Team player with good written and verbal communication abilities. Proactively ask for help and offer help.

Knowledge Examples:
Appropriate software programs/modules; technical designing; programming languages; DBMS; operating systems and software platforms; integrated development environments (IDE); agile methods; knowledge of the customer domain and sub-domain where the problem is solved.

Additional Comments:
The resource needs to have sound technical know-how on Azure Databricks, knowledge of SQL querying, and experience managing Databricks notebooks. Hands-on experience with SQL, including SQL constraints and operators, and modifying and querying data from tables (illustrated in the sketch below).

Skills: Azure Databricks, ADF, SQL

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
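To illustrate the SQL constraints, operators, and data-modification skills this listing asks for, here is a small self-contained sketch; it uses Python's built-in sqlite3 purely so the example runs anywhere (the role itself targets SQL on Azure Databricks), and the table and values are made up.

```python
# Small, self-contained illustration of SQL constraints, operators, and
# modifying/querying data. Uses sqlite3 only so the example runs anywhere;
# the table and values are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Constraints: PRIMARY KEY, NOT NULL, CHECK
cur.execute(
    """
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer TEXT    NOT NULL,
        amount   REAL    CHECK (amount >= 0)
    )
    """
)

# Modifying data: INSERT and UPDATE
cur.executemany(
    "INSERT INTO orders (order_id, customer, amount) VALUES (?, ?, ?)",
    [(1, "Acme", 120.0), (2, "Globex", 80.0), (3, "Initech", 45.5)],
)
cur.execute("UPDATE orders SET amount = amount * 1.1 WHERE customer = 'Acme'")

# Querying with operators: comparison, BETWEEN, ORDER BY
cur.execute(
    "SELECT order_id, customer, amount FROM orders "
    "WHERE amount BETWEEN 50 AND 200 ORDER BY amount DESC"
)
for row in cur.fetchall():
    print(row)

conn.close()
```

On Databricks the same statements would run as Spark SQL against Delta tables, with notebook cells replacing the cursor calls.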

Posted 1 month ago

Apply

12.0 years

0 Lacs

Hyderābād

On-site

Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Associate Director. In this role, you will:
- Design, develop, and optimize data pipelines using Azure Databricks, PySpark, and Prophesy.
- Implement and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Apache Airflow for orchestration.
- Develop and optimize complex SQL queries and Python-based data transformation logic (see the sketch below).
- Work with version control systems (GitHub, Azure DevOps) to manage code and deployment processes.
- Automate deployment of data pipelines using CI/CD practices in Azure DevOps.
- Ensure data quality, security, and compliance with best practices.
- Monitor and troubleshoot performance issues in data pipelines.
- Collaborate with cross-functional teams to define data requirements and strategies.

Requirements
To be successful in this role, you should meet the following requirements:
- 12+ years of experience in data engineering, working with Azure Databricks, PySpark, and SQL.
- Hands-on experience with Prophesy for data pipeline development.
- Proficiency in Python for data processing and transformation.
- Experience with Apache Airflow for workflow orchestration.
- Strong expertise in Azure Data Factory (ADF) for building and managing ETL processes.
- Familiarity with GitHub and Azure DevOps for version control and CI/CD automation.
- Solid understanding of data modelling, warehousing, and performance optimization.
- Ability to work in an agile environment and manage multiple priorities effectively.
- Excellent problem-solving skills and attention to detail.
- Experience with Delta Lake and Lakehouse architecture.
- Hands-on experience with Terraform or Infrastructure as Code (IaC).
- Understanding of machine learning workflows in a data engineering context.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
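For context on the kind of PySpark transformation logic this role covers, here is a minimal hedged sketch of a batch pipeline step; the file paths, column names, and business rule are illustrative assumptions, not HSBC specifics.

```python
# Minimal PySpark sketch of a batch transformation step: read raw data,
# apply SQL-style logic, and write a curated output. Paths, columns, and
# the filtering rule are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transactions_daily_batch").getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/raw/transactions/")

curated = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
       .withColumn("txn_date", F.to_date("txn_timestamp"))
       .groupBy("account_id", "txn_date")
       .agg(F.sum("amount").alias("daily_amount"),
            F.count("*").alias("txn_count"))
)

# Partition the curated output by date so downstream queries can prune files.
curated.write.mode("overwrite").partitionBy("txn_date").parquet(
    "/mnt/curated/transactions_daily/"
)

spark.stop()
```

In practice a step like this would be scheduled by ADF or Airflow and promoted through environments via the CI/CD pipelines the listing mentions.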

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will:
- Design, develop, and optimize data pipelines using Azure Databricks, PySpark, and Prophesy.
- Implement and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Apache Airflow for orchestration.
- Develop and optimize complex SQL queries and Python-based data transformation logic.
- Work with version control systems (GitHub, Azure DevOps) to manage code and deployment processes.
- Automate deployment of data pipelines using CI/CD practices in Azure DevOps.
- Ensure data quality, security, and compliance with best practices.
- Monitor and troubleshoot performance issues in data pipelines.
- Collaborate with cross-functional teams to define data requirements and strategies.

Requirements
To be successful in this role, you should meet the following requirements:
- 5+ years of experience in data engineering, working with Azure Databricks, PySpark, and SQL.
- Hands-on experience with Prophesy for data pipeline development.
- Proficiency in Python for data processing and transformation.
- Experience with Apache Airflow for workflow orchestration.
- Strong expertise in Azure Data Factory (ADF) for building and managing ETL processes.
- Familiarity with GitHub and Azure DevOps for version control and CI/CD automation.
- Solid understanding of data modelling, warehousing, and performance optimization.
- Ability to work in an agile environment and manage multiple priorities effectively.
- Excellent problem-solving skills and attention to detail.
- Experience with Delta Lake and Lakehouse architecture.
- Hands-on experience with Terraform or Infrastructure as Code (IaC).
- Understanding of machine learning workflows in a data engineering context.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI

Posted 1 month ago

Apply

3.0 years

6 - 8 Lacs

Chennai

Remote

Job Title: Azure Data Engineer
Experience: 3+ years
Location: Remote

Job Description:
- 3+ years of experience as a Data Engineer with strong Azure expertise
- Proficiency in Azure Data Factory (ADF) and Azure Blob Storage
- Working knowledge of SQL and data modeling principles
- Experience working with REST APIs for data integration (see the sketch below)
- Hands-on experience with Snowflake data warehouse
- Exposure to GitHub and Azure DevOps for CI/CD and version control
- Understanding of DevOps concepts as applied to data workflows
- Azure certification (e.g., DP-203) is highly desirable
- Strong problem-solving and communication skills

Speak with Employer:
Mobile Number: 7418488223
Mail Id: ahalya.b@findq.in

Job Types: Full-time, Permanent
Benefits: Health insurance
Schedule: Day shift
Work Location: In person
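As one hedged illustration of REST-API-based ingestion into Azure Blob Storage, here is a short sketch; the API URL, container name, and connection-string environment variable are placeholders, and the requests and azure-storage-blob packages are assumed to be available.

```python
# Sketch of a simple REST-API-to-Blob-Storage ingestion step of the kind this
# listing describes. The API URL, container name, and connection string are
# placeholders; requires the 'requests' and 'azure-storage-blob' packages.
import json
import os
from datetime import datetime, timezone

import requests
from azure.storage.blob import BlobServiceClient

API_URL = "https://api.example.com/v1/orders"   # hypothetical source API
CONTAINER = "raw"                               # hypothetical landing container


def ingest_to_blob() -> str:
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    payload = response.json()

    blob_name = f"orders/{datetime.now(timezone.utc):%Y/%m/%d}/orders.json"
    service = BlobServiceClient.from_connection_string(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"]
    )
    blob = service.get_blob_client(container=CONTAINER, blob=blob_name)
    blob.upload_blob(json.dumps(payload), overwrite=True)
    return blob_name


if __name__ == "__main__":
    print("Landed:", ingest_to_blob())
```

The same landing step is often expressed declaratively in ADF with a REST linked service and a Copy activity; the Python version is shown only to make the flow concrete.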

Posted 1 month ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Hyderabad

Remote

Lead Data Engineer with Health Care Domain

Role & responsibilities
Position: Lead Data Engineer
Experience: 7+ Years
Location: Hyderabad | Chennai | Remote

Summary: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, maintain existing systems/processes and develop new features, along with reviewing, presenting and implementing performance improvements.

Duties and Responsibilities
- Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies.
- Monitor active ETL jobs in production.
- Build out data lineage artifacts to ensure all current and future systems are properly documented.
- Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes.
- Assess current and future data transformation needs to recommend, develop, and train on new data integration tool technologies.
- Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations.
- Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs.
- Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence and MDM solutions, including Data Lakes/Data Vaults.

Required Skills
- This job has no supervisory responsibilities.
- Strong experience with Snowflake and Azure Data Factory (ADF).
- Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling or data engineering work.
- 5+ years of experience with strong proficiency in SQL query/development skills.
- Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks.
- Hands-on experience with ETL tools (e.g. Informatica, Talend, dbt, Azure Data Factory).
- Experience working in the healthcare industry with PHI/PII.
- Creative, lateral, and critical thinker; excellent communicator with well-developed interpersonal skills.
- Good at prioritizing tasks and time management.
- Ability to describe, create and implement new solutions.
- Experience with related or complementary open source software platforms and languages (e.g. Java, Linux, Apache, Perl/Python/PHP, Chef).
- Knowledge of / hands-on experience with BI tools and reporting software (e.g. Cognos, Power BI, Tableau).
- Big Data stack (e.g. Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).

Posted 1 month ago

Apply

4.0 years

1 - 10 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

About UHG
United Health Group is a leading health care company serving more than 85 million people worldwide. The organization is ranked 5th among Fortune 500 companies. UHG serves its customers through two different platforms – United Health Care (UHC) and Optum. UHC is responsible for providing healthcare coverage and benefits services, while Optum provides information and technology-enabled health services. India operations of UHG are aligned to Optum. The Optum Global Analytics Team, part of Optum, develops broad-based and targeted analytics solutions across different verticals for all lines of business.

Primary Responsibilities:
- Gather, analyze and document business requirements, while leveraging knowledge of claims, clinical and other healthcare systems
- Develop ETL jobs using Talend, Python, a cloud-based data warehouse, Jenkins, Kafka and an orchestration tool
- Write advanced SQL queries
- Create and interpret functional and technical specifications and design documents
- Understand the business and how various data elements and subject areas are utilized in order to develop and deliver reports to the business
- Be an SME on the Claims, Member or Provider module
- Provide regular status updates to higher management
- Design, develop, and implement scalable and high-performing data models and solutions using Snowflake and Oracle
- Manage and optimize data replication and ingestion processes using Oracle and Snowflake
- Develop and maintain ETL pipelines using Azure Data Factory (ADF) and Databricks
- Optimize query performance and reduce latency by leveraging pre-aggregated tables and efficient data processing techniques
- Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions
- Implement data security measures and ensure compliance with industry standards
- Automate data governance and security controls to maintain data integrity and compliance
- Develop and maintain comprehensive documentation for data architecture, data flows, ETL processes, and configurations
- Continuously optimize the performance of data pipelines and queries to improve efficiency and reduce costs
- Apply a basic, structured, standard approach to work
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's or a 4-year university degree
- 5+ years of experience
- Experience in developing ETL jobs using Snowflake, ADF, Databricks and Python
- Experience in writing efficient and advanced SQL queries
- Experience in both producing and consuming data utilizing Kafka (see the sketch below)
- Experience working on large-scale cloud-based data warehouses – Snowflake, Databricks
- Good experience in building data pipelines using ADF
- Knowledge of Agile methodologies, roles, responsibilities and deliverables
- Proficiency in Python for data processing and automation
- Demonstrated ability to learn and adapt to new data technologies

Preferred Qualifications:
- Certified in Azure Data Engineering (AZ-205)
- Extensive experience with Azure cloud services (Azure Data Factory, Azure Databricks, Azure SQL Database, etc.)
- Solid understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI/CD)
- Knowledge of SQL and NoSQL databases
- Proficiency in Python for data processing and automation
- Proven excellent time management, communication, decision making, and presentation skills
- Proven good problem-solving skills
- Proven good communication skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
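To make the Kafka produce/consume qualification concrete, here is a short hedged sketch; it uses the kafka-python package as one possible client, and the broker address, topic, and message shape are illustrative placeholders.

```python
# Sketch of producing and consuming messages with Kafka, as referenced in the
# qualifications above. Uses the kafka-python package as one possible client;
# the broker address, topic, and message shape are illustrative placeholders.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKERS = "localhost:9092"   # placeholder broker
TOPIC = "claims-events"      # placeholder topic

# Produce a small JSON event.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"claim_id": 101, "status": "ADJUDICATED"})
producer.flush()

# Consume events from the same topic.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,   # stop iterating if no messages arrive
)
for message in consumer:
    print(message.value)
consumer.close()
```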

Posted 1 month ago

Apply

5.0 years

6 - 22 Lacs

Noida

On-site

Job Title: ETL Lead – Azure Data Factory (ADF)
Department: Data Engineering / Analytics
Employment Type: Full-time
Experience Level: 5+ years

About the Role
We are seeking an experienced ETL Lead with strong expertise in Azure Data Factory (ADF) to lead and oversee data ingestion and transformation projects across the organization. The role demands a mix of technical proficiency and leadership to design scalable data pipelines, manage a team of engineers, and collaborate with cross-functional stakeholders to ensure reliable data delivery.

Key Responsibilities
- Lead the design, development, and implementation of end-to-end ETL pipelines using Azure Data Factory (ADF).
- Architect scalable ingestion solutions from multiple structured and unstructured sources (e.g., SQL Server, APIs, flat files, cloud storage).
- Define best practices for ADF pipeline orchestration, performance tuning, and cost optimization.
- Mentor, guide, and manage a team of ETL engineers, ensuring high-quality deliverables and adherence to project timelines.
- Work closely with business analysts, data modelers, and source system owners to understand and translate data requirements.
- Establish data quality checks, monitoring frameworks, and alerting mechanisms.
- Drive code reviews, CI/CD integration (using Azure DevOps), and documentation standards.
- Own delivery accountability across multiple ingestion and data integration workstreams.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related discipline.
- 5+ years of hands-on ETL development experience, with 3+ years of experience in Azure Data Factory.
- Deep understanding of data ingestion, transformation, and warehousing best practices.
- Strong SQL skills and experience with cloud-native data storage (ADLS Gen2, Blob Storage).
- Proficiency in orchestrating complex data flows, parameterized pipelines, and incremental data loads (see the sketch below).
- Experience in handling large-scale data migration or modernization projects.

Preferred Skills
- Familiarity with modern data platforms like Azure Synapse, Snowflake, Databricks.
- Exposure to Azure DevOps pipelines for CI/CD of ADF pipelines and linked services.
- Understanding of data governance, security (RBAC), and compliance requirements.
- Experience leading Agile teams and sprint-based delivery models.
- Excellent communication, leadership, and stakeholder management skills.

Job Type: Full-time
Pay: ₹671,451.97 - ₹2,218,396.67 per year
Benefits: Health insurance
Schedule: Day shift
Application Question(s): 5+ years of hands-on ETL development experience, with 3+ years of experience in Azure Data Factory.
Experience: ETL: 5 years (Required); Azure: 3 years (Required)
Work Location: In person
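As a hedged sketch of how a parameterized ADF pipeline can be triggered and monitored from code (one common way to wire pipelines into a wider orchestration), here is an outline using the Azure Python SDK; the subscription, resource group, factory, pipeline names, and parameters are all placeholders.

```python
# Sketch of triggering and monitoring a parameterized ADF pipeline run from
# Python. Resource group, factory, and pipeline names are placeholders;
# requires the azure-identity and azure-mgmt-datafactory packages and
# suitable Azure permissions.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-data-platform"   # placeholder
FACTORY_NAME = "adf-ingestion"        # placeholder
PIPELINE_NAME = "pl_ingest_sales"     # placeholder parameterized pipeline

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a run, passing pipeline parameters (e.g., an incremental window).
run = client.pipelines.create_run(
    RESOURCE_GROUP,
    FACTORY_NAME,
    PIPELINE_NAME,
    parameters={"window_start": "2024-01-01", "window_end": "2024-01-02"},
)

# Poll until the run finishes.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print("Pipeline finished with status:", status)
```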

Posted 1 month ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description
Role Proficiency: Independently develops error-free code with high-quality validation of applications, guides other developers and assists Lead 1 – Software Engineering.

Outcomes:
Understand and provide input to the application/feature/component designs, developing the same in accordance with user stories/requirements. Code, debug, test, document and communicate product/component/features at development stages. Select appropriate technical options for development such as reusing, improving or reconfiguring existing components. Optimise efficiency, cost and quality by identifying opportunities for automation/process improvements and agile delivery models. Mentor Developer 1 – Software Engineering and Developer 2 – Software Engineering to effectively perform in their roles. Identify problem patterns and improve the technical design of the application/system. Proactively identify issues/defects/flaws in module/requirement implementation. Assist Lead 1 – Software Engineering on technical design. Review activities and begin demonstrating Lead 1 capabilities in making technical decisions.

Measures of Outcomes:
Adherence to engineering process and standards (coding standards); adherence to schedule/timelines; adherence to SLAs where applicable; number of defects post delivery; number of non-compliance issues; reduction of reoccurrence of known defects; quick turnaround of production bugs; meeting the defined productivity standards for the project; number of reusable components created; completion of applicable technical/domain certifications; completion of all mandatory training requirements.

Outputs Expected:
Code: Develop code independently for the above. Configure: Implement and monitor the configuration process. Test: Create and review unit test cases, scenarios and execution. Domain relevance: Develop features and components with a good understanding of the business problem being addressed for the client. Manage Project: Manage module-level activities. Manage Defects: Perform defect RCA and mitigation. Estimate: Estimate time, effort and resource dependence for one's own work and others' work, including modules. Document: Create documentation for own work as well as perform peer review of documentation of others' work. Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities. Status Reporting: Report status of tasks assigned and comply with project-related reporting standards/processes. Release: Execute the release process. Design: LLD for multiple components. Mentoring: Mentor juniors on the team; set FAST goals and provide feedback to FAST goals of mentees.

Skill Examples:
Explain and communicate the design/development to the customer. Perform and evaluate test results against product specifications. Develop user interfaces, business software components and embedded software components. Manage and guarantee high levels of cohesion and quality. Use data models. Estimate effort and resources required for developing/debugging features/components. Perform and evaluate tests in the customer or target environment. Team player with good written and verbal communication abilities. Proactively ask for help and offer help.

Knowledge Examples:
Appropriate software programs/modules; technical designing; programming languages; DBMS; operating systems and software platforms; integrated development environments (IDE); agile methods; knowledge of the customer domain and sub-domain where the problem is solved.

Additional Comments:
The resource needs to have sound technical know-how on Azure Databricks, knowledge of SQL querying, and experience managing Databricks notebooks. Hands-on experience with SQL, including SQL constraints and operators, and modifying and querying data from tables.

Skills: Azure Databricks, ADF, SQL

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: ETL Ingestion Engineer (Azure Data Factory)
Department: Data Engineering / Analytics
Employment Type: Full-time
Experience Level: 2–5 years
Count: ETL Ingestion Engineer - 1 | Sr. ETL Ingestion Engineer - 1

About the Role
We are looking for a talented Data Engineer with hands-on experience in Azure Data Factory (ADF) to join our Data Engineering team. The individual will be responsible for building, orchestrating, and maintaining robust data ingestion pipelines from various source systems into our data lake or data warehouse environments.

Key Responsibilities
- Design and implement scalable data ingestion pipelines using Azure Data Factory (ADF).
- Extract data from a variety of sources such as SQL Server, flat files, APIs, and cloud storage.
- Develop ADF pipelines and data flows to support both batch and incremental loads (see the watermark sketch below).
- Ensure data quality, consistency, and reliability throughout the ETL process.
- Optimize ADF pipelines for performance, cost, and scalability.
- Monitor pipeline execution, troubleshoot failures, and ensure data availability meets SLAs.
- Document pipeline logic, source-target mappings, and operational procedures.

Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 2+ years of experience in ETL development and data pipeline implementation.
- Strong hands-on experience with Azure Data Factory (ADF), including linked services, datasets, pipelines, and triggers.
- Proficiency in SQL and working with structured and semi-structured data (CSV, JSON, Parquet).
- Experience with Azure storage systems (ADLS Gen2, Blob Storage) and data movement.
- Familiarity with job monitoring and logging mechanisms in Azure.

Preferred Skills
- Experience with Azure Data Lake, Synapse Analytics, or Databricks.
- Exposure to Azure DevOps for CI/CD in data pipelines.
- Understanding of data governance, lineage, and compliance requirements (GDPR, HIPAA, etc.).
- Knowledge of RESTful APIs and API-based ingestion.
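Here is a pure-Python illustration of the high-watermark pattern that incremental loads of this kind typically rely on (ADF expresses the same idea with a watermark lookup and a parameterized source query); the table and column names are hypothetical.

```python
# Pure-Python illustration of the high-watermark pattern commonly used for
# incremental loads. Table and column names are hypothetical.
from datetime import datetime


def build_incremental_query(table: str, watermark_column: str, last_watermark: datetime) -> str:
    """Return a source query that only selects rows changed since the last load."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE {watermark_column} > '{last_watermark:%Y-%m-%d %H:%M:%S}'"
    )


def next_watermark(rows: list[dict], watermark_column: str, last_watermark: datetime) -> datetime:
    """After a successful load, advance the watermark to the max value seen."""
    if not rows:
        return last_watermark
    return max(row[watermark_column] for row in rows)


if __name__ == "__main__":
    last = datetime(2024, 1, 1)
    print(build_incremental_query("dbo.Orders", "ModifiedDate", last))

    loaded = [
        {"order_id": 1, "ModifiedDate": datetime(2024, 1, 3, 9, 30)},
        {"order_id": 2, "ModifiedDate": datetime(2024, 1, 5, 17, 0)},
    ]
    print("New watermark:", next_watermark(loaded, "ModifiedDate", last))
```

In ADF the query would be built by a Lookup activity feeding a Copy activity, with the new watermark written back after a successful run.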

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role Description

Role Overview
We are looking for a highly skilled Senior Data Engineer with strong hands-on experience in Azure Databricks, PySpark, and SQL, who can creatively design and develop high-performance data applications. The ideal candidate should be capable of optimizing solutions, mentoring team members, and collaborating with stakeholders to deliver quality software aligned with business goals.

Roles & Responsibilities

Technical Development
- Develop application components based on design specifications (HLD/LLD).
- Code, debug, test, and document the development lifecycle.
- Ensure adherence to coding standards and reuse best practices and design patterns.
- Optimize for performance, scalability, and cost efficiency.

Testing & Quality
- Create and review unit test cases, test scenarios, and execution plans.
- Collaborate with QA teams to review test plans and provide technical clarifications.
- Conduct root cause analysis (RCA) for defects and implement mitigation strategies.

Design & Architecture
- Contribute to the design and architecture of components/features (HLD, LLD, SAD).
- Participate in interface and data model design discussions.
- Evaluate and suggest technical options including reuse, reconfiguration, or new implementation.

Project & Delivery
- Manage and deliver assigned modules/user stories within agile sprints.
- Provide effort estimates and assist in sprint planning.
- Execute and monitor the release and deployment process.

Documentation & Configuration
- Create/review development checklists, design templates, and technical documentation.
- Define and enforce configuration management practices.

Domain Understanding
- Understand business needs and translate them into technical requirements.
- Advise developers by applying domain knowledge to product features.
- Pursue domain certifications to enhance project contributions.

Team & Stakeholder Management
- Set and review FAST goals for self and team members.
- Provide mentorship and address people-related issues in the team.
- Interface with customers to gather requirements, present solutions, and conduct demos.
- Maintain motivation and positive dynamics within the team.

Knowledge Management
- Contribute to internal documentation repositories and client-specific knowledge bases.
- Review and promote reusable components and best practices.

Must-Have Skills
- Azure Databricks – Hands-on experience with Spark-based analytics on Azure.
- PySpark – Proficient in developing and optimizing large-scale ETL pipelines (see the sketch below).
- SQL – Strong command of writing, optimizing, and troubleshooting complex queries.
- Agile Methodology – Experience working in Scrum/Kanban environments.
- Software Development Life Cycle (SDLC) – Solid understanding from requirement to release.
- Experience in effort estimation, defect triage, and client communication.

Good-to-Have Skills
- Azure Data Lake, Azure Data Factory, Synapse Analytics
- CI/CD pipelines using Azure DevOps or GitHub Actions
- Data modeling and interface definition techniques
- Domain experience in healthcare, finance, or other data-heavy industries
- Experience with unit testing frameworks (e.g., PyTest)
- Familiarity with business intelligence tools (Power BI, Tableau)

Certifications (Preferred)
- Microsoft Certified: Azure Data Engineer Associate
- Databricks Certified Developer – Apache Spark using Python
- Agile/Scrum Certification (CSM, PSM, etc.)

Soft Skills
- Strong analytical and problem-solving abilities
- Ability to work under pressure and handle multiple tasks
- Excellent communication and client-facing skills
- Proactive attitude and team mentorship capabilities
- Ability to drive technical discussions and solution demos

Skills: ADB, ADF, MongoDB, PySpark
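As a hedged sketch of the kind of PySpark pipeline tuning this role mentions, here are two common optimizations shown in isolation; the sample data, join key, and output path are made up for illustration.

```python
# Sketch of two common PySpark optimizations: broadcasting a small dimension
# table into a join, and partitioning output by a query-friendly column.
# Data, join key, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("optimization_sketch").getOrCreate()

facts = spark.createDataFrame(
    [(1, "IN", 120.0), (2, "US", 80.0), (3, "IN", 45.5)],
    ["order_id", "country_code", "amount"],
)
countries = spark.createDataFrame(
    [("IN", "India"), ("US", "United States")],
    ["country_code", "country_name"],
)

# Broadcast the small lookup table so the join avoids a full shuffle.
enriched = facts.join(F.broadcast(countries), on="country_code", how="left")

# Cache if the enriched frame feeds several downstream aggregations.
enriched.cache()
enriched.groupBy("country_name").agg(F.sum("amount").alias("total")).show()

# Write partitioned by country so downstream reads can prune files.
enriched.write.mode("overwrite").partitionBy("country_code").parquet(
    "/tmp/enriched_orders/"
)

spark.stop()
```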

Posted 1 month ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description
Years of experience: 8-10 years

Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.

Outcomes:
Interpret the application/feature/component design to develop the same in accordance with specifications. Code, debug, test, document and communicate product/component/feature development stages. Validate results with user representatives; integrate and commission the overall solution. Select appropriate technical options for development such as reusing, improving or reconfiguring existing components or creating own solutions. Optimise efficiency, cost and quality. Influence and improve customer satisfaction. Set FAST goals for self/team; provide feedback on FAST goals of team members.

Measures of Outcomes:
Adherence to engineering process and standards (coding standards); adherence to project schedule/timelines; number of technical issues uncovered during project execution; number of defects in the code; number of defects post delivery; number of non-compliance issues; on-time completion of mandatory compliance trainings.

Outputs Expected:
Code: Code as per design; follow coding standards, templates and checklists; review code for team and peers. Documentation: Create/review templates, checklists, guidelines and standards for design/process/development; create/review deliverable documents, design documentation, requirements and test cases/results. Configure: Define and govern the configuration management plan; ensure compliance from the team. Test: Review and create unit test cases, scenarios and execution; review the test plan created by the testing team; provide clarifications to the testing team. Domain relevance: Advise software developers on design and development of features and components with a deep understanding of the business problem being addressed for the client; learn more about the customer domain, identifying opportunities to provide valuable additions to customers; complete relevant domain certifications. Manage Project: Manage delivery of modules and/or manage user stories. Manage Defects: Perform defect RCA and mitigation; identify defect trends and take proactive measures to improve quality. Estimate: Create and provide input for effort estimation for projects. Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities; review the reusable documents created by the team. Release: Execute and monitor the release process. Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications/features/business components/data models. Interface with Customer: Clarify requirements and provide guidance to the development team; present design options to customers; conduct product demos. Manage Team: Set FAST goals and provide feedback; understand aspirations of team members and provide guidance and opportunities; ensure the team is engaged in the project. Certifications: Take relevant domain/technology certifications.

Skill Examples:
Explain and communicate the design/development to the customer. Perform and evaluate test results against product specifications. Break down complex problems into logical components. Develop user interfaces and business software components. Use data models. Estimate time and effort required for developing/debugging features/components. Perform and evaluate tests in the customer or target environment. Make quick decisions on technical/project-related challenges. Manage a team; mentor and handle people-related issues; maintain high motivation levels and positive dynamics in the team. Interface with other teams, designers and other parallel practices. Set goals for self and team; provide feedback to team members. Create and articulate impactful technical presentations. Follow a high level of business etiquette in emails and other business communication. Drive conference calls with customers, addressing customer questions. Proactively ask for and offer help. Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks. Build confidence with customers by meeting deliverables on time and with quality. Estimate time, effort and resources required for developing/debugging features/components. Make appropriate utilization of software and hardware. Strong analytical and problem-solving abilities.

Knowledge Examples:
Appropriate software programs/modules; functional and technical designing; programming languages (proficient in multiple skill clusters); DBMS; operating systems and software platforms; software development life cycle; Agile (Scrum or Kanban) methods; integrated development environments (IDE); rapid application development (RAD); modelling technology and languages; interface definition languages (IDL); knowledge of the customer domain and deep understanding of the sub-domain where the problem is solved.

Additional Comments:
We are seeking an experienced Oracle SCM Cloud Functional/Technical Consultant with strong techno-functional expertise in Oracle EBS SCM modules. The ideal candidate will play a key role in bridging the gap between business users and technology, enabling smooth implementation, migration, and support of Oracle SCM applications (Cloud and/or EBS).

Key Responsibilities:
- Analyze business requirements and translate them into functional and technical solutions in Oracle SCM Cloud and/or Oracle E-Business Suite (EBS).
- Configure and support Oracle SCM Cloud modules such as Procurement, Inventory, Order Management, Product Hub, Manufacturing, etc.
- Work on data migration, integrations, and extensions using Oracle tools (e.g., FBDI, ADF, REST/SOAP APIs, BI Publisher).
- Collaborate with cross-functional teams during implementations, upgrades, and support phases.
- Provide techno-functional support, including issue resolution, patch testing, and performance optimization.
- Develop technical components such as interfaces, reports, conversions, and customizations.
- Assist in UAT, training, and documentation for end-users.
- Stay up to date with Oracle SCM Cloud updates and EBS R12 enhancements.

Required Skills & Experience:
- 5+ years of experience with Oracle EBS SCM modules (R12.1/12.2).
- At least 1–2 years of hands-on experience in Oracle SCM Cloud preferred.
- Strong knowledge of modules like PO, INV, OM, WMS, BOM, ASCP, iProcurement, etc.
- Proficiency in SQL, PL/SQL, Oracle Workflow, BI Publisher, Web ADI, and Oracle Forms/Reports.
- Experience with integration tools such as Oracle Integration Cloud (OIC) or middleware (SOA, Dell Boomi, MuleSoft) is a plus.
- Ability to conduct requirement gathering sessions, CRP, SIT, and UAT.
- Excellent problem-solving, communication, and interpersonal skills.

Preferred Qualifications:
- Oracle Cloud SCM or Oracle EBS certifications.
- Experience with Agile/Waterfall methodologies.
- Exposure to global implementation/support environments.
- Familiarity with DevOps practices for Oracle deployments.

Skills: Oracle SCM, Oracle EBS Functional, Oracle EBS Technical

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description
DigitalXgeeks specializes in scaling business revenue and helping startups by optimizing their departments through innovative digital solutions. The company's mission is to empower businesses by providing advanced technological support and expertise. DigitalXgeeks is committed to fostering growth and efficiency in a dynamic work environment.

Role Description
This is a freelance/part-time role for a Senior Data Engineer (Training) at DigitalXgeeks, requiring good experience in the following Azure Data Engineer and Databricks Developer skills:
➢ Python
➢ Databricks
➢ Spark
➢ PySpark
➢ Spark SQL
➢ Delta Lake
➢ Azure Data Factory
➢ Azure ADF & Databricks projects
➢ Power BI
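To make the skill list above concrete, here is a minimal PySpark/Delta Lake sketch of the kind of transformation this role would cover. It is illustrative only: the file paths and column names are made up, a Delta Lake-enabled runtime (e.g. Databricks) is assumed, and on Databricks the SparkSession already exists.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

# Read raw CSV files and apply a simple transformation with Spark SQL functions
raw = (spark.read.option("header", "true")
       .csv("/mnt/landing/sales/*.csv"))                 # hypothetical mount path
clean = (raw.withColumn("amount", F.col("amount").cast("double"))
            .filter(F.col("amount") > 0))

# Persist as a Delta table that downstream tools (e.g. Power BI) can query
(clean.write.format("delta")
      .mode("overwrite")
      .save("/mnt/curated/sales_delta"))

spark.sql("SELECT COUNT(*) AS rows FROM delta.`/mnt/curated/sales_delta`").show()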

Posted 1 month ago

Apply

4.0 years

0 Lacs

Thane, Maharashtra

On-site

202503666 Thane, Maharashtra, India

Description
Experience: 4+ years of experience

Key Responsibilities
- Help define/improvise actionable and decision-driving management information (MI)
- Ensure streamlining, consistency and standardization of MI within the handled domain
- Build and operate flexible processes/reports that meet changing business needs
- Prepare detailed documentation of schemas
- Any other duties commensurate with position or level of responsibility

Desired Profile
- Prior experience in insurance companies/the insurance sector would be an added advantage
- Experience with Azure technologies (including SSAS, SQL Server, Azure Data Lake, Synapse)
- Hands-on experience with SQL and Power BI
- Excellent understanding of developing stored procedures, functions, views and T-SQL programs
- Experience developing and maintaining ETL (data extraction, transformation and loading) mappings using ADF to extract data from multiple source systems
- Ability to analyze existing SQL queries for performance improvements
- Excellent written and verbal communication skills
- Ability to create and design MI for the team
- Expected to handle multiple projects with stringent timelines
- Good interpersonal skills
- Actively influences strategy through creative and unique ideas
- Exposure to documentation activities

Key Competencies
- Technical Learning: can learn new skills and knowledge as per business requirements
- Action Oriented: enjoys working; is action oriented and full of energy for the things he/she sees as challenging; not fearful of acting with a minimum of planning; seizes more opportunities than others
- Decision Quality: makes good decisions based upon a mixture of analysis, wisdom, experience, and judgment
- Detail Orientation: high attention to detail, especially in data quality, documentation, and reporting
- Communication & Collaboration: strong interpersonal skills, effective in team discussions and stakeholder interactions

Qualifications
B.Com/BE/BTech/MCA from a reputed college/institute
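As a small illustration of the SQL/T-SQL side of this profile, the sketch below calls a stored procedure and runs a parameterized query from Python via pyodbc. The server, database, credentials, and object names are hypothetical and only stand in for the kind of MI refresh and reporting queries described above.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mi_reporting;"
    "UID=mi_user;PWD=secret;Encrypt=yes"
)
cur = conn.cursor()

# Refresh a (hypothetical) management-information summary table via a stored procedure
cur.execute("{CALL dbo.usp_refresh_claims_mi (?)}", "2024-Q4")
conn.commit()

# Parameterized query instead of string concatenation, which also helps plan reuse
cur.execute(
    "SELECT product_line, SUM(gross_written_premium) AS gwp "
    "FROM dbo.policy_summary WHERE reporting_period = ? GROUP BY product_line",
    "2024-Q4",
)
for row in cur.fetchall():
    print(row.product_line, row.gwp)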

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

Job Summary:
We are looking for an experienced Database & Data Engineer who can own the full lifecycle of our cloud data systems, from database optimization to building scalable data pipelines. This hybrid role demands deep expertise in SQL performance tuning, cloud-native ETL/ELT, and modern Azure data engineering using tools like Azure Data Factory, Databricks, and PySpark. Ideal candidates will be comfortable working across the Medallion architecture, transforming raw data into high-quality assets ready for analytics and machine learning.

Key Responsibilities:

🔹 Database Engineering
- Implement and optimize indexing, partitioning, and sharding strategies to improve performance and scalability.
- Tune and refactor complex SQL queries, stored procedures, and triggers using execution plans and profiling tools.
- Perform database performance benchmarking, query profiling, and resource usage analysis.
- Address query bottlenecks, deadlocks, and concurrency issues using diagnostic tools and SQL optimization.
- Design and implement read/write splitting and horizontal/vertical sharding for distributed systems.
- Automate backup, restore, high availability, and disaster recovery using native Azure features.
- Maintain schema versioning and enable automated deployment via CI/CD pipelines and Git.

🔹 Data Engineering
- Build and orchestrate scalable data pipelines using Azure Data Factory (ADF), Databricks, and PySpark.
- Implement the Medallion architecture with Bronze, Silver, and Gold layers in Azure Data Lake.
- Process and transform data using PySpark, Pandas, and NumPy.
- Create and manage data integrations from REST APIs, flat files, databases, and third-party systems.
- Develop and manage incremental loads, SCD Type 1 & 2, and advanced data transformation workflows.
- Leverage Azure services like Synapse, Azure SQL DB, Azure Blob Storage, and Azure Data Lake Gen2.
- Ensure data quality, consistency, and lineage across environments.

🔹 Collaboration & Governance
- Work with cross-functional teams including data science, BI, and business analysts.
- Maintain standards around data governance, privacy, and security compliance.
- Contribute to internal documentation and the team knowledge base using tools like JIRA, Confluence, and SharePoint.
- Participate in Agile workflows and help define sprint deliverables for data engineering tasks.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of hands-on experience in data engineering and SQL performance optimization in cloud environments.
- Expertise in Azure Data Factory, Azure Data Lake, Azure SQL, Azure Synapse, and Databricks.
- Proficient in SQL, Python, PySpark, Pandas, and NumPy.
- Strong experience in query performance tuning, indexing, and partitioning.
- Familiar with PostgreSQL (PGSQL) and handling NoSQL databases like Cosmos DB or Elasticsearch.
- Experience with REST APIs, flat files, and real-time integrations.
- Working knowledge of version control (Git) and CI/CD practices in Azure DevOps or equivalent.
- Solid understanding of the Medallion architecture, lakehouse concepts, and data reliability best practices.

Preferred Qualifications:
- Microsoft Certified: Azure Data Engineer Associate or equivalent.
- Familiarity with Docker, Kubernetes, or other containerization tools.
- Exposure to streaming platforms such as Kafka, Azure Event Hubs, or Azure Stream Analytics.
- Industry experience in supply chain, logistics, or finance is a plus.
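To illustrate the Medallion and incremental-load items above, here is a minimal sketch of a Bronze-to-Silver step that performs a Delta Lake MERGE upsert (SCD Type 1 style). It is a sketch only: the lake paths, column names, and the simple date-based incremental filter are assumptions, and a Delta Lake-enabled Spark runtime is required.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")
incremental = (bronze
    .filter(F.col("ingest_date") == F.current_date())       # naive incremental slice
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .dropDuplicates(["order_id"]))

silver_path = "/mnt/lake/silver/orders"
if DeltaTable.isDeltaTable(spark, silver_path):
    silver = DeltaTable.forPath(spark, silver_path)
    (silver.alias("t")
        .merge(incremental.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()          # SCD Type 1 style overwrite of changed rows
        .whenNotMatchedInsertAll()
        .execute())
else:
    incremental.write.format("delta").save(silver_path)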

Posted 1 month ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

United Health Group is a leading health care company serving more than 85 million people worldwide. The organization is ranked 5th among Fortune 500 companies. UHG serves its customers through two different platforms - United Health Care (UHC) and Optum. UHC is responsible for providing healthcare coverage and benefits services, while Optum provides information and technology enabled health services. India operations of UHG are aligned to Optum. The Optum Global Analytics Team, part of Optum, is involved in developing broad-based and targeted analytics solutions across different verticals for all lines of business.

Primary Responsibility
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's degree or 4-year university degree
- 5+ years of experience
- Development experience on Azure and Databricks
- Hands-on experience in developing ETL pipelines using Databricks and ADF (Azure Data Factory)
- Hands-on experience in Python, PySpark, Spark SQL
- Hands-on experience migrating from on-premises environments to Azure Cloud
- Azure cloud exposure (Azure services like Virtual Machines, Load Balancer, SQL Database, Azure DNS, Blob Storage, Azure AD etc.)
- Good to have: hands-on experience with big data platforms (Hadoop, Hive) and SQL scripting
- Good to have: experience with Scala, Snowflake, and the healthcare domain
- Good to have: experience with CI/CD tools such as GitHub Actions
- Ability to develop innovative approaches to performance optimization and automation
- Proven excellent verbal communication and presentation skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
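For context on the Databricks/ADF pipeline work listed in the qualifications, here is a small illustrative PySpark step: aggregating with Spark SQL and writing the result to an Azure SQL Database over JDBC. Paths, table names, and connection details are placeholders, and the SQL Server JDBC driver is assumed to be available on the cluster.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("claims-etl").getOrCreate()

spark.read.parquet("/mnt/raw/claims").createOrReplaceTempView("claims")
summary = spark.sql("""
    SELECT member_id, COUNT(*) AS claim_count, SUM(paid_amount) AS total_paid
    FROM claims
    WHERE status = 'PAID'
    GROUP BY member_id
""")

# Write the aggregate to Azure SQL DB via JDBC (connection details are placeholders)
(summary.write.format("jdbc")
    .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=analytics")
    .option("dbtable", "dbo.claims_summary")
    .option("user", "etl_user").option("password", "secret")
    .mode("overwrite")
    .save())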

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description

Key Responsibilities
- Lead the design, development, and deployment of Oracle Fusion SaaS solutions, particularly in Supply Chain and Finance.
- Build and maintain integrations using Oracle Integration Cloud (OIC), REST/SOAP web services, and middleware tools.
- Customize and extend Fusion applications using BI Publisher, OTBI, FBDI, HDL, and ADF.
- Translate business requirements into technical specifications and detailed solution designs.
- Support the full development lifecycle including change management, documentation, testing, and deployment.
- Participate in formal design/code reviews and ensure adherence to coding standards.
- Collaborate with IT service providers to ensure quality, performance, and scalability of outsourced work.
- Provide Level 3 support for critical technical issues.
- Stay current with emerging Oracle technologies and contribute to continuous improvement initiatives.

Qualifications
- Bachelor's degree in Computer Science, Information Technology, Business, or a related field (or equivalent experience).
- Relevant certifications in Oracle Fusion or related technologies are a plus.
- Compliance with export controls or sanctions regulations may be required.

Core Competencies
- Customer Focus – Builds strong relationships and delivers customer-centric solutions.
- Global Perspective – Applies a broad, global lens to problem-solving.
- Manages Complexity – Navigates complex information to solve problems effectively.
- Manages Conflict – Resolves conflicts constructively and efficiently.
- Optimizes Work Processes – Continuously improves processes for efficiency and effectiveness.
- Values Differences – Embraces diverse perspectives and cultures.

Technical Competencies
- Solution Design & Configuration – Designs scalable, secure, and maintainable solutions.
- Solution Functional Fit Analysis – Evaluates how well components interact to meet business needs.
- Solution Modeling & Validation Testing – Creates models and tests solutions to ensure they meet requirements.
- Performance Tuning & Data Modeling – Optimizes application and database performance.

Skills & Technical Expertise
- Strong knowledge of Oracle SaaS architecture, data models, and PaaS extensions.
- Proficiency in Oracle Integration Cloud (OIC), REST/SOAP APIs.
- Experience with Oracle tools: BI Publisher, OTBI, FBDI, HDL, ADF.
- Ability to analyze and revise existing systems for improvements.
- Familiarity with SDLC, version control, and automation tools.

Experience
- 5+ years of hands-on experience in Oracle Fusion SaaS development and technical implementation.
- Proven experience with Oracle Fusion Supply Chain and Finance modules.
- Intermediate level of relevant work experience (3–5 years minimum).

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Hybrid
Job Type: Exempt - Experienced
ReqID: 2415054
Relocation Package: No
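As an illustration of the OIC/REST integration work this role describes, the sketch below invokes a hypothetical REST endpoint exposed by an Oracle Integration Cloud integration from Python. The host, flow path, payload shape, and basic-auth credentials are all assumptions (OAuth is more typical in practice); none of them come from the posting.

import requests

# Hypothetical OIC REST trigger endpoint for an integration named PO_SYNC
OIC_ENDPOINT = ("https://my-oic-instance.integration.ocp.oraclecloud.com"
                "/ic/api/integration/v1/flows/rest/PO_SYNC/1.0/orders")

payload = {"orderNumber": "PO-100045", "status": "APPROVED"}
resp = requests.post(
    OIC_ENDPOINT,
    json=payload,
    auth=("integration.user", "password"),   # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
print(resp.status_code, resp.json())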

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description

Key Responsibilities
- Lead the design, development, and deployment of Oracle Fusion SaaS solutions, particularly in Supply Chain and Finance.
- Build and maintain integrations using Oracle Integration Cloud (OIC), REST/SOAP web services, and middleware tools.
- Customize and extend Fusion applications using BI Publisher, OTBI, FBDI, HDL, and ADF.
- Translate business requirements into technical specifications and detailed solution designs.
- Support the full development lifecycle including change management, documentation, testing, and deployment.
- Participate in formal design/code reviews and ensure adherence to coding standards.
- Collaborate with IT service providers to ensure quality, performance, and scalability of outsourced work.
- Provide Level 3 support for critical technical issues.
- Stay current with emerging Oracle technologies and contribute to continuous improvement initiatives.

Qualifications
- Bachelor's degree in Computer Science, Information Technology, Business, or a related field (or equivalent experience).
- Relevant certifications in Oracle Fusion or related technologies are a plus.
- Compliance with export controls or sanctions regulations may be required.

Core Competencies
- Customer Focus – Builds strong relationships and delivers customer-centric solutions.
- Global Perspective – Applies a broad, global lens to problem-solving.
- Manages Complexity – Navigates complex information to solve problems effectively.
- Manages Conflict – Resolves conflicts constructively and efficiently.
- Optimizes Work Processes – Continuously improves processes for efficiency and effectiveness.
- Values Differences – Embraces diverse perspectives and cultures.

Technical Competencies
- Solution Design & Configuration – Designs scalable, secure, and maintainable solutions.
- Solution Functional Fit Analysis – Evaluates how well components interact to meet business needs.
- Solution Modeling & Validation Testing – Creates models and tests solutions to ensure they meet requirements.
- Performance Tuning & Data Modeling – Optimizes application and database performance.

Experience
- 5+ years of hands-on experience in Oracle Fusion SaaS development and technical implementation.
- Proven experience with Oracle Fusion Supply Chain and Finance modules.
- Intermediate level of relevant work experience (3–5 years minimum).

Skills & Technical Expertise
- Strong knowledge of Oracle SaaS architecture, data models, and PaaS extensions.
- Proficiency in Oracle Integration Cloud (OIC), REST/SOAP APIs.
- Experience with Oracle tools: BI Publisher, OTBI, FBDI, HDL, ADF.
- Ability to analyze and revise existing systems for improvements.
- Familiarity with SDLC, version control, and automation tools.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Hybrid
Job Type: Exempt - Experienced
ReqID: 2415084
Relocation Package: No

Posted 1 month ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

About UHG
United Health Group is a leading health care company serving more than 85 million people worldwide. The organization is ranked 5th among Fortune 500 companies. UHG serves its customers through two different platforms - United Health Care (UHC) and Optum. UHC is responsible for providing healthcare coverage and benefits services, while Optum provides information and technology enabled health services. India operations of UHG are aligned to Optum. The Optum Global Analytics Team, part of Optum, is involved in developing broad-based and targeted analytics solutions across different verticals for all lines of business.

Primary Responsibilities
- Gather, analyze and document business requirements, leveraging knowledge of claims, clinical and other healthcare systems
- Develop ETL jobs using Talend, Python, a cloud-based data warehouse, Jenkins, Kafka and an orchestration tool
- Write advanced SQL queries
- Create and interpret functional and technical specifications and design documents
- Understand the business and how various data elements and subject areas are utilized in order to develop and deliver the reports to business
- Be an SME on either the claims, member or provider module
- Provide regular status updates to higher management
- Design, develop, and implement scalable and high-performing data models and solutions using Snowflake and Oracle
- Manage and optimize data replication and ingestion processes using Oracle and Snowflake
- Develop and maintain ETL pipelines using Azure Data Factory (ADF) and Databricks
- Optimize query performance and reduce latency by leveraging pre-aggregated tables and efficient data processing techniques
- Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions
- Implement data security measures and ensure compliance with industry standards
- Automate data governance and security controls to maintain data integrity and compliance
- Develop and maintain comprehensive documentation for data architecture, data flows, ETL processes, and configurations
- Continuously optimize the performance of data pipelines and queries to improve efficiency and reduce costs
- Follow a basic, structured, standard approach to work
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's degree or 4-year university degree
- 5+ years of experience
- Experience in developing ETL jobs using Snowflake, ADF, Databricks and Python
- Experience in writing efficient and advanced SQL queries
- Experience in both producing and consuming data utilizing Kafka
- Experience working on large-scale cloud-based data warehouses - Snowflake, Databricks
- Good experience in building data pipelines using ADF
- Knowledge of Agile methodologies, roles, responsibilities and deliverables
- Proficiency in Python for data processing and automation
- Demonstrated ability to learn and adapt to new data technologies

Preferred Qualifications
- Certified in Azure Data Engineering (AZ-205)
- Extensive experience with Azure cloud services (Azure Data Factory, Azure Databricks, Azure SQL Database, etc.)
- Solid understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI/CD)
- Knowledge of SQL and NoSQL databases
- Proficiency in Python for data processing and automation
- Proven excellent time management, communication, decision making, and presentation skills
- Proven good problem-solving skills
- Proven good communication skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
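To illustrate the "producing and consuming data utilizing Kafka" and Snowflake items above, here is a minimal Python sketch that consumes messages from a Kafka topic and inserts them into a Snowflake table. Broker addresses, topic, credentials, and table/column names are placeholders, and it assumes the kafka-python and snowflake-connector-python packages; a real pipeline would batch inserts rather than commit per message.

import json
from kafka import KafkaConsumer
import snowflake.connector

consumer = KafkaConsumer(
    "claims-events",                                   # hypothetical topic
    bootstrap_servers=["broker1:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="etl_user", password="secret",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()

for message in consumer:
    event = message.value
    cur.execute(
        "INSERT INTO claim_events (claim_id, event_type, payload) VALUES (%s, %s, %s)",
        (event["claim_id"], event["type"], json.dumps(event)),
    )
    conn.commit()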

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will:
- Design, develop, and optimize data pipelines using Azure Databricks, PySpark, and Prophesy.
- Implement and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Apache Airflow for orchestration.
- Develop and optimize complex SQL queries and Python-based data transformation logic.
- Work with version control systems (GitHub, Azure DevOps) to manage code and deployment processes.
- Automate deployment of data pipelines using CI/CD practices in Azure DevOps.
- Ensure data quality, security, and compliance with best practices.
- Monitor and troubleshoot performance issues in data pipelines.
- Collaborate with cross-functional teams to define data requirements and strategies.

Requirements
To be successful in this role, you should meet the following requirements:
- 5+ years of experience in data engineering, working with Azure Databricks, PySpark, and SQL.
- Hands-on experience with Prophesy for data pipeline development.
- Proficiency in Python for data processing and transformation.
- Experience with Apache Airflow for workflow orchestration.
- Strong expertise in Azure Data Factory (ADF) for building and managing ETL processes.
- Familiarity with GitHub and Azure DevOps for version control and CI/CD automation.
- Solid understanding of data modelling, warehousing, and performance optimization.
- Ability to work in an agile environment and manage multiple priorities effectively.
- Excellent problem-solving skills and attention to detail.
- Experience with Delta Lake and Lakehouse architecture.
- Hands-on experience with Terraform or Infrastructure as Code (IaC).
- Understanding of machine learning workflows in a data engineering context.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
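As a small illustration of the Airflow-based orchestration this role mentions, the sketch below defines a daily DAG that submits a Databricks notebook run. The connection ID, cluster spec, and notebook path are assumptions, and it relies on the apache-airflow-providers-databricks package; it is a pattern sketch, not a description of HSBC's actual pipelines.

from datetime import datetime
from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="daily_curation_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",        # run daily at 02:00
    catchup=False,
) as dag:
    curate = DatabricksSubmitRunOperator(
        task_id="run_curation_notebook",
        databricks_conn_id="databricks_default",    # hypothetical Airflow connection
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/data-eng/curation"},
    )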

Posted 1 month ago

Apply