797 ADF Jobs - Page 22

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

India

On-site

Source: LinkedIn

Location: Bangalore/Gurgaon

Key Responsibilities:
- Lead the implementation and optimization of Microsoft Purview across the client's data estate on Microsoft Fabric/Azure (ADF, Databricks, etc.).
- Define and enforce data governance policies, data classification, sensitivity labeling, and data lineage to ensure readiness for GenAI use cases.
- Collaborate with data engineers, architects, and AI/ML teams to ensure data discoverability, compliance, and ethical-AI readiness.
- Design and implement data cataloging strategies to support GenAI model training and inference.
- Provide guidance on data access controls, privacy, and regulatory compliance (e.g., GDPR, HIPAA).
- Conduct workshops and training sessions for client stakeholders on Purview capabilities and best practices.
- Monitor and report on data governance KPIs and GenAI-readiness metrics.

Required Skills & Qualifications:
- Proven experience as a Microsoft Purview SME in enterprise environments.
- Strong knowledge of Microsoft Fabric, OneLake, and Synapse Data Engineering.
- Experience with data governance frameworks and metadata management.
- Hands-on experience with data classification, sensitivity labels, and data lineage tracking.
- Understanding of compliance standards and data privacy regulations.
- Excellent communication and stakeholder management skills.

Preferred Qualifications:
- Microsoft certifications in Azure Data, Purview, or Security & Compliance.
- Experience working with Azure OpenAI, Copilot integrations, or other GenAI platforms.
- Background in data science, AI ethics, or ML operations is a plus.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

Remote

Source: LinkedIn

Position Title: Manager, Data Science
Location: Remote (hybrid option available if in Chennai)
Company: ADF Data Science Pvt. Ltd. - Analytics, Risk and R&D
Position Type: Full-Time

Job Summary:
We are seeking skilled and motivated Data Scientists with 4+ years of experience in data science and good domain understanding. The ideal candidate will have a strong foundation in data science concepts, proficiency in analytical tools, and the ability to translate data insights into actionable business recommendations. This role requires a blend of technical expertise and business acumen, preferably in financial (credit, risk) fields, to drive data-driven decision making. This will be an individual contributor role, or lead for a small team if relevant experience is present.

Qualifications:
- Education: Bachelor of Engineering or a master's degree in a quantitative area. Candidates must be from tier-1 institutes.

Experience:
- 4+ years of experience in data science and business analytics projects; exposure to credit risk analytics is expected.
- Proven experience in data handling and analytics, with good exposure to statistical analysis and machine learning.

Technical Skills:
- Expertise in programming languages such as Python and SQL.
- Expertise in machine learning algorithms.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to lead a team.

(ref: hirist.tech)
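The posting above asks for Python, SQL, and credit-risk analytics. As a hedged illustration of the kind of work involved, here is a minimal credit-risk scoring sketch in Python with scikit-learn; the dataset, file path, and column names are hypothetical, not taken from the listing.

```python
# Minimal credit-risk scoring sketch. The CSV path and the columns
# (age, income, utilization, default_flag) are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("loan_applications.csv")        # hypothetical dataset
features = ["age", "income", "utilization"]      # hypothetical features
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["default_flag"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# AUC is a standard discrimination metric for credit-risk models
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```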

Posted 2 weeks ago

Apply

5.0 - 8.0 years

10 - 18 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Source: Naukri

- 5-8 years of strong experience working on Microsoft Azure-based technologies and frameworks.
- Good experience designing and implementing Azure-based solutions using .NET Core, C#, Function Apps, Azure Service Bus, Azure SQL Server, and SSIS packages, among other Azure technologies.
- Experience designing and implementing REST+JSON web services.
- Expertise in applying software design patterns, object-oriented practices, and the software development life cycle: testing, version control, deployment, production support, and maintenance.
- Expertise in relational and non-relational database concepts, design, and database management systems.
- Ability to capture, document, and implement functional and non-functional requirements in technical solutions.
- Experience working in an Agile-driven development model and delivering projects in an agile environment.
- Ensure quality of deliverables within project timelines.
- Independently manage daily client communication, especially over calls.
- Drive work to completion with accuracy and timely deliverables.
- Able to work independently without close supervision, and collaboratively as part of cross-team efforts.
- Experience in project management and team management.
- Good to have: Financial Services knowledge.
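This role is .NET-centric, but the Azure Service Bus messaging flow it references is language-agnostic. For consistency with the other sketches on this page, here is a minimal send/receive example using the Python SDK (azure-servicebus); the connection string and queue name are placeholders.

```python
# Hedged sketch: send and receive a queue message with Azure Service Bus.
# Connection string and queue name are placeholders, not from the posting.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"   # placeholder
QUEUE = "orders"                               # hypothetical queue name

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Send one message
    with client.get_queue_sender(QUEUE) as sender:
        sender.send_messages(ServiceBusMessage('{"orderId": 123}'))

    # Receive and settle pending messages
    with client.get_queue_receiver(QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:
            print(str(msg))
            receiver.complete_message(msg)     # remove from the queue
```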

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

Source: LinkedIn

Job Summary: We are looking for a Data Engineer with solid hands-on experience in Azure-based data pipelines and Snowflake to help build and scale data ingestion, transformation, and integration processes in a cloud-native environment.

Key Responsibilities:
- Develop and maintain data pipelines using ADF, Snowflake, and Azure Storage
- Perform data integration from various sources including APIs, flat files, and databases
- Write clean, optimized SQL and support data modeling efforts in Snowflake
- Monitor and troubleshoot pipeline issues and data quality concerns
- Contribute to documentation and promote best practices across the team

Qualifications:
- 3-5 years of experience in data engineering or a related role
- Strong hands-on knowledge of Snowflake, Azure Data Factory, SQL, and Azure Data Lake
- Proficient in scripting (Python preferred) for data manipulation and automation
- Understanding of data warehousing concepts and ETL/ELT patterns
- Experience with Git, JIRA, and agile delivery environments is a plus
- Strong attention to detail and eagerness to learn in a collaborative team setting
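As a rough sketch of the ADF-to-Snowflake pattern this posting describes, the following Python snippet uses snowflake-connector-python to bulk-load staged files and run a simple ELT merge; the account details, stage, and table names are all hypothetical placeholders.

```python
# Minimal sketch: load staged data into Snowflake, then run an ELT merge.
# All connection values and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
)
try:
    cur = conn.cursor()
    # Bulk-load files that ADF (or another tool) landed in an external stage
    cur.execute(
        "COPY INTO raw_orders FROM @landing_stage/orders/ "
        "FILE_FORMAT=(TYPE=CSV SKIP_HEADER=1)"
    )
    # Simple ELT step: upsert into a curated table
    cur.execute("""
        MERGE INTO curated_orders t
        USING raw_orders s ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.amount = s.amount
        WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
    """)
finally:
    conn.close()
```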

Posted 2 weeks ago

Apply

4.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Experience Level: 4 to 5 years. Immediate joiners can apply.

We are looking for a skilled Data Engineer with expertise in Azure services to design, develop, and maintain scalable data pipelines and integrations across structured and unstructured data sources. The ideal candidate should have experience building solutions with Azure Data Factory, Logic Apps, and Functions, with a strong focus on governance, monitoring, and automation.

Responsibilities include:
- Building and managing pipelines using Azure Data Factory, automating workflows via Logic Apps, and implementing custom logic with Azure Functions (Python/C#).
- Setting up API integrations, managing secrets via Key Vault, implementing access control with Entra ID, and cataloging with Azure Purview.
- Performing thorough unit and integration testing, ensuring data integrity and pipeline optimization with Azure Monitor.

Required Skills:
- Strong understanding of Azure integration services (ADF, Functions, Event Hubs).
- Hands-on experience with ETL/ELT pipelines, REST APIs, and JSON/XML processing.
- Knowledge of Azure governance and monitoring tools, including Purview and Azure Monitor.
- Proficiency in Python or C#, with working knowledge of SQL.
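The posting pairs Azure Functions with Key Vault-backed secrets. Here is a minimal sketch of that access pattern, assuming a Python-runtime, HTTP-triggered Function with a managed identity; the vault URL and secret name are placeholders.

```python
# Hedged sketch: an HTTP-triggered Azure Function that reads a secret from
# Key Vault via managed identity. Vault URL and secret name are placeholders.
import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://<your-vault>.vault.azure.net"   # placeholder

def main(req: func.HttpRequest) -> func.HttpResponse:
    # DefaultAzureCredential picks up the Function's managed identity in Azure
    credential = DefaultAzureCredential()
    client = SecretClient(vault_url=VAULT_URL, credential=credential)
    api_key = client.get_secret("upstream-api-key").value  # hypothetical name
    # ... call the upstream API with api_key here ...
    return func.HttpResponse("secret retrieved", status_code=200)
```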

Posted 2 weeks ago

Apply

40.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Description:
As a member of the Support organization, your focus is to deliver post-sales support and solutions to the Oracle customer base while serving as an advocate for customer needs. This involves resolving post-sales non-technical customer inquiries via phone and electronic means, as well as technical questions regarding the use of and troubleshooting for our Electronic Support Services. As a primary point of contact for customers, you are responsible for facilitating customer relationships with Support and providing advice and assistance to internal Oracle employees on diverse customer situations and escalated issues.

Technical profile:
- Hands-on experience supporting, integrating, and developing with XML Publisher, BI Publisher Enterprise, Oracle Designer, SQL and PL/SQL, JDeveloper, Java, ADF, and HTML/CSS with JavaScript, and extending Oracle Cloud (Financials, Distribution, Manufacturing, HCM)
- Experience with Oracle Fusion and Oracle on-premise applications (Finance or Supply Chain), including an understanding of the data model, business process functionality, and data flow
- Developing integrations using OIC, VBCS, and REST APIs/web services
- Expertise in PaaS offerings such as Oracle Analytics Cloud Services, Visual Builder Cloud Services, and Oracle Integration Services

Responsibilities:
As an Advisory Systems Engineer, you are expected to be an expert member of the problem-solving/avoidance team and be highly skilled in solving extremely complex (often previously unknown), critical customer issues. Performing the assigned duties with a high level of autonomy and reporting to management on customer status and technical matters on a regular basis, you will be expected to work with very limited guidance from management. Further, the Advisory Systems Engineer is sought out by customers and Oracle employees to provide expert technical advice. The technical profile is the same as listed above.

Qualifications: Career Level - IC4

About Us:
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 2 weeks ago

Apply

40.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Job Description:
As a member of the Support organization, your focus is to deliver post-sales support and solutions to the Oracle customer base while serving as an advocate for customer needs. This involves resolving post-sales non-technical customer inquiries via phone and electronic means, as well as technical questions regarding the use of and troubleshooting for our Electronic Support Services. As a primary point of contact for customers, you are responsible for facilitating customer relationships with Support and providing advice and assistance to internal Oracle employees on diverse customer situations and escalated issues.

Technical profile:
- Hands-on experience supporting, integrating, and developing with XML Publisher, BI Publisher Enterprise, Oracle Designer, SQL and PL/SQL, JDeveloper, Java, ADF, and HTML/CSS with JavaScript, and extending Oracle Cloud (Financials, Distribution, Manufacturing, HCM)
- Experience with Oracle Fusion and Oracle on-premise applications (Finance or Supply Chain), including an understanding of the data model, business process functionality, and data flow
- Developing integrations using OIC, VBCS, and REST APIs/web services
- Expertise in PaaS offerings such as Oracle Analytics Cloud Services, Visual Builder Cloud Services, and Oracle Integration Services

Responsibilities:
As an Advisory Systems Engineer, you are expected to be an expert member of the problem-solving/avoidance team and be highly skilled in solving extremely complex (often previously unknown), critical customer issues. Performing the assigned duties with a high level of autonomy and reporting to management on customer status and technical matters on a regular basis, you will be expected to work with very limited guidance from management. Further, the Advisory Systems Engineer is sought out by customers and Oracle employees to provide expert technical advice. The technical profile is the same as listed above.

Qualifications: Career Level - IC4

About Us:
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 2 weeks ago

Apply

7.0 - 9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Description:
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 16,700 stores in 31 countries serving more than 9 million customers each day. At Circle K, we are building a best-in-class global data engineering practice to support intelligent business decision-making and drive value across our retail ecosystem. As we scale our engineering capabilities, we're seeking a Lead Data Engineer to serve as both a technical leader and people coach for our India-based Data Enablement pod. This role will oversee the design, delivery, and maintenance of critical cross-functional datasets and reusable data assets while also managing a group of talented engineers in India. This position plays a dual role: contributing hands-on to engineering execution while mentoring and developing engineers in their technical careers.

About The Role:
The ideal candidate combines deep technical acumen, stakeholder awareness, and a people-first leadership mindset. You'll collaborate with global tech leads, managers, platform teams, and business analysts to build trusted, performant data pipelines that serve use cases beyond traditional data domains.

Responsibilities:
- Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
- Lead the technical execution of non-domain-specific initiatives (e.g., reusable dimensions, TLOG standardization, enablement pipelines)
- Architect data models and reusable layers consumed by multiple downstream pods
- Guide platform-wide patterns such as parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
- Mentor and coach the team
- Partner with product and platform leaders to ensure engineering consistency and delivery excellence
- Act as an L3 escalation point for operational data issues impacting foundational pipelines
- Own engineering best practices, sprint planning, and quality across the Enablement pod
- Contribute to platform discussions and architectural decisions across regions

Job Requirements:
- Education: Bachelor's or master's degree in Computer Science, Engineering, or a related field
- Relevant Experience: 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark; experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse

Knowledge and Preferred Skills:
- Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices.
- Solid grasp of data governance, metadata tagging, and role-based access control.
- Proven ability to mentor and grow engineers in a matrixed or global environment.
- Strong verbal and written communication skills, with the ability to operate cross-functionally.
- Certifications in Azure, Databricks, or Snowflake are a plus.
- Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management).
- Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, master data management (MDM), and data quality tools.
- Strong experience in ETL/ELT development, QA, and operations/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance).
- Hands-on experience with databases (Azure SQL DB, Snowflake, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting.
- ADF, Databricks, and Azure certifications are a plus.

Technologies we use: Databricks, Azure SQL DW/Synapse, Snowflake, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI
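Among the patterns this role highlights are parameterized, auditable ADF pipelines. Here is a hedged sketch of triggering and polling a parameterized pipeline run with the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline name, and parameter are all placeholders.

```python
# Hedged sketch: trigger and monitor a parameterized ADF pipeline run.
# All resource names and the "load_date" parameter are placeholders.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION = "<subscription-id>"
RG, FACTORY = "<resource-group>", "<factory-name>"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION)

# Kick off a pipeline run with runtime parameters
run = client.pipelines.create_run(
    RG, FACTORY, "pl_load_tlog", parameters={"load_date": "2024-01-01"}
)

# Poll until the run reaches a terminal state (a simple audit trail)
while True:
    status = client.pipeline_runs.get(RG, FACTORY, run.run_id)
    print(status.status)
    if status.status not in ("InProgress", "Queued"):
        break
    time.sleep(30)
```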

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Design, develop, and implement scalable data pipelines using Azure Databricks
- Develop PySpark-based data transformations and integrate structured and unstructured data from various sources
- Optimize Databricks clusters for performance, scalability, and cost-efficiency within the Azure ecosystem
- Monitor, troubleshoot, and resolve performance bottlenecks in Databricks workloads
- Manage orchestration and scheduling of end-to-end data pipelines using tools like Apache Airflow, ADF scheduling, and Logic Apps
- Collaborate effectively with the Architecture team in designing solutions and with product owners in validating implementations
- Implement best practices to enable data quality, monitoring, logging, and alerting for failure scenarios and exception handling
- Document step-by-step processes to troubleshoot potential issues and deliver cost-optimized cloud solutions
- Provide technical leadership, mentorship, and best practices for junior data engineers
- Stay up to date with Azure and Databricks advancements to continuously improve data engineering capabilities
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- B.Tech or equivalent
- 7+ years of overall experience in the IT industry and 6+ years of experience in data engineering, with 3+ years of hands-on experience in Azure Databricks
- Hands-on experience with Delta Lake, Lakehouse architecture, and data versioning
- Experience with CI/CD pipelines for data engineering solutions (Azure DevOps, Git)
- Solid knowledge of performance tuning, partitioning, caching, and cost optimization in Databricks
- Deep understanding of data warehousing, data modeling (Kimball/Inmon), and big data processing
- Solid expertise in the Azure ecosystem, including Azure Synapse, Azure SQL, ADLS, and Azure Functions
- Proficiency in PySpark, Python and SQL for data processing in Databricks
- Proven excellent written and verbal communication skills
- Proven excellent problem-solving skills and ability to work independently
- Proven ability to balance multiple and competing priorities and execute accordingly
- Proven highly self-motivated with excellent interpersonal and collaborative skills
- Proven ability to anticipate risks and obstacles and develop plans for mitigation
- Proven excellent documentation experience and skills

Preferred Qualifications:
- Azure certifications (DP-203, AZ-304, etc.)
- Experience in infrastructure as code, scheduling as code, and automating operational activities using Terraform scripts

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
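To make the PySpark/Databricks responsibilities above concrete, here is a minimal transformation sketch that reads raw JSON, applies a basic quality filter, and appends to a Delta table (the versioning format named in the qualifications); the paths and column names are hypothetical.

```python
# Minimal PySpark sketch: read raw data, transform, write to Delta.
# Paths and columns (claim_id, claim_amount) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_pipeline").getOrCreate()

raw = spark.read.json("/mnt/raw/claims/")              # hypothetical path

cleaned = (
    raw.filter(F.col("claim_amount") > 0)              # basic data quality gate
       .withColumn("ingest_date", F.current_date())
       .dropDuplicates(["claim_id"])
)

# A Delta write enables the versioning/time travel noted in the qualifications
cleaned.write.format("delta").mode("append").save("/mnt/curated/claims/")
```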

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Source: LinkedIn

Role: Data Platform Architect
Location: India (remote)
Type of Employment: Contract

Key Technologies: C, C#, .NET, Python, Scala, Databricks, ADF, Event Hub, Event Grid, ADLS, ADX (Azure Data Explorer), Fluentd, Azure App Services; architecting new solutions

Responsibilities:
- Design and architect scalable data platform solutions.
- Lead implementation of data pipelines and integration workflows.
- Collaborate with stakeholders to define data strategies.
- Ensure platform performance, security, and compliance.
- Hands-on experience architecting solutions for a data platform holding 1-2 petabytes of data.

Required Skills:
- Strong experience in data engineering and architecture.
- Expertise in modern data engineering tools and Azure services.
- Strong programming background in C-family languages and Python.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork abilities.
- Azure certifications; experience with real-time data processing.

Preferred:
- Knowledge of industry best practices and standards.
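Event Hub appears in the stack above. A minimal sketch of publishing a batch of events with the azure-eventhub Python SDK follows; the connection string, hub name, and event payload are placeholders.

```python
# Hedged sketch: publish a batch of events to Azure Event Hubs.
# Connection string, hub name, and payload fields are placeholders.
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    "<event-hub-namespace-connection-string>", eventhub_name="telemetry"
)
with producer:
    batch = producer.create_batch()
    for i in range(10):
        batch.add(EventData(f'{{"sensor": {i}, "value": 42}}'))
    producer.send_batch(batch)   # one network call for the whole batch
```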

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Job Description:
As a member of the Support organization, your focus is to deliver post-sales support and solutions to the Oracle customer base while serving as an advocate for customer needs. This involves resolving post-sales non-technical customer inquiries via phone and electronic means, as well as technical questions regarding the use of and troubleshooting for our Electronic Support Services. As a primary point of contact for customers, you are responsible for facilitating customer relationships with Support and providing advice and assistance to internal Oracle employees on diverse customer situations and escalated issues.

Education & Experience:
- BE, BTech, MCA, CA or equivalent preferred; other qualifications with adequate experience may be considered.
- 5+ years of relevant working experience.

Functional/Technical Knowledge & Skills:
Must have a good understanding of Oracle Cloud Financials (version 12+) capabilities. We are looking for a techno-functional person with real-time, hands-on functional/product and/or technical experience; and/or experience with L2 or L3 level support; and/or equivalent knowledge. We expect the candidate to have:
- Strong knowledge of business processes and concepts.
- Implementation/support experience in a minimum of any five modules across one of these pillars: ERP - Cloud Financials modules (GL, AP, AR, FA, IBY, PA, CST, ZX, PSA); HCM - Core HR, Benefits, Absence, T&L, Payroll, Compensation, Talent Management; or SCM - Inventory, OM, Procurement.
- Ability to relate product functionality to business processes, and thus offer implementation advice to customers on how to meet their various business scenarios using Oracle Cloud Financials.
- Strong technical skills: expert in SQL, PL/SQL, OTBI/BIP/FRS reports, FBDI, ADFDI, BPM workflows, ADF Faces, BI Extract for FTP, payment integration, and personalisation.
- Strong problem-solving skills.
- Strong customer interaction and service orientation, so you can understand customers' critical situations, provide the appropriate response, and mobilise organisational resources while setting realistic expectations.
- Strong operations management and innovation orientation, so you can continually improve processes, methods, tools, and utilities.
- Strong team player, so you leverage each other's strengths; you will often collaborate with peers within and across teams.
- Strong learning orientation, so you keep abreast of emerging business models/processes, applications product solutions, product features, and technology features, and use this learning to deliver value to customers on a daily basis.
- High flexibility, so you remain agile in a fast-changing business and organisational environment.
- Create and maintain appropriate documentation for architecture, design, technical, implementation, support and test activities.

Personal Attributes:
- Self-driven and result-oriented.
- Strong problem-solving/analytical skills.
- Strong customer support and relationship skills.
- Effective communication (verbal and written); focus on relationships (internal and external).
- Strong willingness to learn new things and share them with others.
- Influencing/negotiating; team player; customer-focused; confident and decisive.
- Values expertise (maintaining professional expertise in own discipline), enthusiasm, flexibility, and organizational skills.
- Values and enjoys coaching, knowledge transfer, and teaching technical courses.

Note: Shift working is mandatory. The candidate should be open to working evening and night shifts on a rotation basis.

Career Level: IC3/IC4/IC5

About Us:
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

15 - 16 Lacs

Gurugram

Work from Office

Source: Naukri

Strong experience in SQL development with solid expertise in AWS cloud services. Proficient in Azure Data Factory (ADF) for building and managing data pipelines in cloud-based data integration solutions. Mail: kowsalya.k@srsinfoway.com

Posted 2 weeks ago

Apply

7.0 years

4 - 9 Lacs

Gurgaon

Remote

Source: Glassdoor

AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation.

At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.

AHEAD is looking for a Sr. Data Engineer (L3 support) to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will work on a variety of data projects, including orchestrating pipelines using modern data engineering tools/architectures as well as designing and integrating existing transactional processing systems. The appropriate candidate must be a subject matter expert in managing data platforms.

Responsibilities:
- Build, operationalize, and monitor data processing systems
- Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and cloud-native tooling
- Implement custom applications using tools such as Event Hubs, ADF, and other cloud-native tools as required to address streaming use cases
- Engineer and maintain ELT processes for loading the data lake (cloud storage, Data Lake Gen2)
- Leverage the right tools for the job to deliver testable, maintainable, and modern data solutions
- Respond to customer/team inquiries and escalations and assist in troubleshooting and resolving challenges
- Work with other scrum team members to estimate and deliver work inside of a sprint
- Research data questions, identify root causes, and interact closely with business users and technical resources
- Possess the ownership and leadership skills to collaborate effectively with Level 1 and Level 2 teams
- Experience raising tickets with Microsoft and engaging with them to address any service or tool outages in production

Qualifications:
- 7+ years of professional technical experience
- 5+ years of hands-on data architecture and data modeling (SME level)
- 5+ years of experience building highly scalable data solutions using Azure Data Factory, Spark, Databricks, and Python
- 5+ years of experience working in cloud environments (AWS and/or Azure)
- 3+ years with programming languages such as Python, Spark, and Spark SQL
- Strong knowledge of the architecture of ADF and Databricks
- Able to work with Level 1 and Level 2 teams to resolve platform outages in production environments
- Strong client-facing communication and facilitation skills
- Strong sense of urgency; ability to set priorities and perform the job with little guidance
- Excellent written and verbal interpersonal skills and the ability to build and maintain collaborative and positive working relationships at all levels
- Should be able to work in shifts
- Should have knowledge of Azure DevOps processes

Key Skills: Azure Data Factory, Azure Databricks, Python, ETL/ELT, Spark, Data Lake, Data Engineering, Event Hubs, Azure Delta, Spark Streaming

Why AHEAD: Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning.

USA employment benefits include: medical, dental, and vision insurance; 401(k); paid company holidays; paid time off; paid parental and caregiver leave; and more. See https://www.aheadbenefits.com/ for additional details. The compensation range indicated in this posting reflects the On-Target Earnings ("OTE") for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.
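For the streaming ingest responsibilities above, here is a minimal Spark Structured Streaming sketch that lands JSON events into a Delta bronze table with checkpointing for recoverability; the schema and paths are hypothetical, and a production pipeline might read from Event Hubs rather than files.

```python
# Minimal Structured Streaming sketch: stream JSON files into a Delta table.
# Schema, source, checkpoint, and sink paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("streaming_ingest").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType()))

stream = (spark.readStream
          .schema(schema)
          .json("/mnt/landing/events/"))          # hypothetical source path

# Checkpointing lets the stream restart from where it left off
(stream.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/events/")
       .outputMode("append")
       .start("/mnt/lake/bronze/events/"))        # hypothetical sink path
```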

Posted 2 weeks ago

Apply

5.0 years

4 - 16 Lacs

Gurgaon

On-site

Source: Glassdoor

Position: SQL + ADF (Azure Data Factory)
Experience Required: Minimum 5+ years
Location: Gurgaon/Hybrid
Job Type: Permanent
Work Timings: 1 PM - 10 PM
Notice Period: Immediate only
Mode of Interview: Virtual

Required Experience:
- Strong experience in SQL development.
- Experience in AWS Cloud.
- Experience in ADF (Azure Data Factory).

Job Type: Full-time
Pay: ₹400,000.00 - ₹1,600,000.00 per year
Benefits: Health insurance
Schedule: Day shift, Monday to Friday
Work Location: In person

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Design, develop, and implement scalable data pipelines using Azure Databricks
- Develop PySpark-based data transformations and integrate structured and unstructured data from various sources
- Optimize Databricks clusters for performance, scalability, and cost-efficiency within the Azure ecosystem
- Monitor, troubleshoot, and resolve performance bottlenecks in Databricks workloads
- Manage orchestration and scheduling of end-to-end data pipelines using tools like Apache Airflow, ADF scheduling, and Logic Apps
- Collaborate effectively with the Architecture team in designing solutions and with product owners in validating implementations
- Implement best practices to enable data quality, monitoring, logging, and alerting for failure scenarios and exception handling
- Document step-by-step processes to troubleshoot potential issues and deliver cost-optimized cloud solutions
- Provide technical leadership, mentorship, and best practices for junior data engineers
- Stay up to date with Azure and Databricks advancements to continuously improve data engineering capabilities
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- B.Tech or equivalent
- 7+ years of overall experience in the IT industry and 6+ years of experience in data engineering, with 3+ years of hands-on experience in Azure Databricks
- Hands-on experience with Delta Lake, Lakehouse architecture, and data versioning
- Experience with CI/CD pipelines for data engineering solutions (Azure DevOps, Git)
- Solid knowledge of performance tuning, partitioning, caching, and cost optimization in Databricks
- Deep understanding of data warehousing, data modeling (Kimball/Inmon), and big data processing
- Solid expertise in the Azure ecosystem, including Azure Synapse, Azure SQL, ADLS, and Azure Functions
- Proficiency in PySpark, Python and SQL for data processing in Databricks
- Proven excellent written and verbal communication skills
- Proven excellent problem-solving skills and ability to work independently
- Proven ability to balance multiple and competing priorities and execute accordingly
- Proven highly self-motivated with excellent interpersonal and collaborative skills
- Proven ability to anticipate risks and obstacles and develop plans for mitigation
- Proven excellent documentation experience and skills

Preferred Qualifications:
- Azure certifications (DP-203, AZ-304, etc.)
- Experience in infrastructure as code, scheduling as code, and automating operational activities using Terraform scripts

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Overview:
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.

- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities:
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications:
- 5+ years of technology work experience in a large-scale global organization; CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
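One concrete slice of the observability work described above is querying ADF for failed pipeline runs and wiring the result into alerting. Here is a hedged sketch using azure-mgmt-datafactory; the subscription, resource group, and factory names are placeholders.

```python
# Hedged sketch: query ADF for pipeline runs that failed in the last day.
# Resource names are placeholders; wire the loop body to your alerting tool.
from datetime import datetime, timedelta, timezone
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters, RunQueryFilter

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

now = datetime.now(timezone.utc)
params = RunFilterParameters(
    last_updated_after=now - timedelta(days=1),
    last_updated_before=now,
    filters=[RunQueryFilter(operand="Status", operator="Equals", values=["Failed"])],
)

runs = client.pipeline_runs.query_by_factory("<resource-group>", "<factory>", params)
for run in runs.value:
    # Hook an alert (email, Teams, PagerDuty, ...) here instead of printing
    print(run.pipeline_name, run.run_id, run.message)
```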

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Experience: 5+ years
Notice Period: Immediate joiners
Work Timings: 1 PM - 10 PM
Location: Gurgaon
Work Mode: Hybrid

Strong experience in SQL development, along with experience in AWS cloud and good experience in ADF.

Role Responsibilities:
- Design, develop, and implement database solutions using SQL Server and Azure Data Factory (ADF).
- Create and optimize complex SQL queries to ensure efficient data fetching.
- Build and manage data pipelines using ADF for data ingestion and transformation.
- Collaborate with stakeholders to gather requirements and understand data needs.
- Perform database maintenance tasks, including backups and recovery.
- Analyze and enhance SQL performance to reduce execution time.
- Work with data analysts to help visualize data and support reporting needs.
- Conduct code reviews and provide constructive feedback to peers.
- Document development processes and ensure adherence to best practices.
- Support system testing and troubleshoot issues as they arise.
- Participate in team meetings to discuss project updates and challenges.
- Ensure data security and compliance with relevant regulations.
- Continuously learn and apply the latest industry trends in database technologies.
- Assist in training junior developers and onboarding new team members.
- Contribute to agile project management by updating task progress in tracking tools.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 3 years of experience as a SQL Developer.
- Proficiency in SQL Server and ADF.
- Hands-on experience with ETL tools and methodologies.
- Strong understanding of database architectures and design.
- Familiarity with cloud technologies, especially Azure.
- Experience with version control systems like Git.
- Solid analytical and problem-solving skills.
- Ability to work effectively in a collaborative team environment.
- Excellent verbal and written communication skills.
- Knowledge of data warehousing concepts is a plus.
- Certifications in SQL or Azure Data Services are an advantage.
- Capability to handle multiple tasks and prioritize effectively.
- Detail-oriented with a focus on quality deliverables.
- Commitment to continuous learning and professional development.

Skills: agile methodologies, performance tuning, data analysis, Azure Data Factory (ADF), cloud technologies, data warehousing, Git, problem solving, SQL Server, database management, Azure, SQL, team collaboration, version control systems, ETL tools
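As a small illustration of the SQL development side of this role, here is a parameterized SQL Server query via pyodbc; parameter markers keep queries injection-safe and plans reusable. The connection details and table name are placeholders.

```python
# Minimal sketch: parameterized query against SQL Server via pyodbc.
# Server, database, credentials, and the dbo.orders table are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<db>;"
    "UID=<user>;PWD=<password>"
)
cursor = conn.cursor()

# Parameter markers (?) keep the query plan cacheable and injection-safe
cursor.execute(
    "SELECT order_id, amount FROM dbo.orders WHERE order_date >= ?",
    "2024-01-01",
)
for order_id, amount in cursor.fetchall():
    print(order_id, amount)
conn.close()
```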

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

- Ability to develop value-creating strategies and models that enable clients to innovate, drive growth, and increase business profitability
- Good knowledge of software configuration management systems
- Awareness of the latest technologies and industry trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Understanding of the financial processes for various types of projects and the available pricing models
- Ability to assess current processes, identify improvement areas, and suggest technology solutions
- Knowledge of one or two industry domains
- Client interfacing skills
- Project and team management

Primary skills: Big Data (Big Table) | Cloud Integration (Azure Data Factory) | Data on Cloud Platform (AWS)

Posted 3 weeks ago

Apply

0 years

0 Lacs

Anupgarh, Rajasthan, India

Remote

Source: LinkedIn

We are a multinational beverage and food corporation founded in 1885, with operations in more than 14 countries and more than 15,000 employees. We have the largest beverage portfolio in the region, and we count on strategic partners such as PepsiCo and AB InBev. Over the last year we have expanded globally, which has led us to split into four business units: apex (transformation), cbc (distribution), beliv (beverage innovation) and bia (food); as part of our dynamic expansion and growth strategy, we are looking for talent to join our corporation. Apply directly at getonbrd.com.

Role Functions:
Design and implement scalable, efficient, and maintainable data engineering solutions using technologies such as Azure Data Factory (ADF), Databricks, and Unity Catalog, applying layered architectures (Bronze/Silver/Gold), ETL/ELT automation with data quality validation, and integrity strategies such as idempotent pipelines and SCD handling. The goal is to guarantee reliable data, optimized for cost and performance, aligned with business needs, and backed by robust documentation and code standards (PEP8, Git) to ease its evolution and governance.

Role Requirements:
- Coordinate the operation of the different environments where data processing runs.
- Extract, transform, and load data so it is aligned with business needs.
- Build efficient integrations to ingest the data required by the business logic.
- Build continuous integration flows that validate the developed pipelines effectively.
- Mentor junior engineers in good practices and scalable solutions.
- Propose and implement technology improvements that optimize data flows.

Main Challenges:
- Requires judgment to design, implement, and maintain an efficient, scalable, and intuitive data structure.
- Requires judgment and experience to follow code best practices when developing market-competitive functionality.
- Process exponentially growing data volumes without letting cloud costs spiral.
- Implement data quality mechanisms that do not impact processing speed.

GETONBRD Job ID: 53848

Conditions:
- Health coverage: Global Mobility Apex, S.A. pays or co-pays health insurance for employees.
- Computer provided: Global Mobility Apex, S.A. provides a computer for your work.
- Informal dress code: no dress code is enforced.
- Remote work policy: locally remote only. The position is 100% remote, but candidates must reside in Chile, Colombia, Ecuador, Peru, Mexico, Guatemala or El Salvador.
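The posting names idempotent pipelines and SCD handling on Databricks. Below is a hedged sketch of a Delta Lake MERGE upsert, which is what makes pipeline reruns idempotent; it shows a simplified Type 1 update (a full SCD Type 2 would add effective-date and current-flag columns), and all table and column names are hypothetical.

```python
# Hedged sketch: idempotent upsert into a Delta table with MERGE.
# Table and column names (gold.customers, customer_id, email) are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
updates = spark.read.parquet("/mnt/silver/customers_updates/")  # hypothetical

target = DeltaTable.forName(spark, "gold.customers")

# MERGE makes reruns idempotent: matched rows update, new rows insert,
# so replaying the same batch does not create duplicates
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdate(set={"email": "s.email",
                         "updated_at": F.current_timestamp()})
 .whenNotMatchedInsertAll()
 .execute())
```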

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: Data Analyst (Snowflake) Job ID: POS-10121 Primary Skill: Python Location: Hyderabad Experience Secondary skills: Snowflake, ADF, and SQL Mode of Work: Work from Office Experience : 5-7 Years About The Job Are you someone with an in-depth understanding of ETL and a strong background in developing Snowflake and ADF ETL-based solutions who develop, document, unit test, and maintain ETL applications and deliver successful code meeting customer expectations? If yes, this opportunity can be the next step in your career. Read on. We are looking for a Snowflake and ADF developer to join our Data leverage team – a team of high-energy individuals who thrive in a rapid-pace and agile product development environment. As a Developer, you will provide accountability in the ETL and Data Integration space, from the development phase through delivery. You will work closely with the Project Manager, Technical Lead, and client teams. Your prime responsibilities will be to develop bug free code with proper unit testing and documentation. You will provide inputs to planning, estimation, scheduling, and coordination of technical activities related to ETL-based applications. You will be responsible for meeting development schedules and delivering high-quality ETL-based solutions that meet technical specifications and design requirements ensuring customer satisfaction. You are expected to possess good knowledge in tools – Snowflake and ADF. Know Your Team At ValueMomentum’s Engineering Center , we are a team of passionate engineers who thrive on tackling complex business challenges with innovative solutions while transforming the P&C insurance value chain. We achieve this through strong engineering foundation and continuously refining our processes, methodologies, tools, agile delivery teams, and core engineering archetypes. Our core expertise lies in six key areas: Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and Domain expertise. Join a team that invests in your growth. Our Infinity Program empowers you to build your career with role-specific skill development, leveraging immersive learning platforms. You'll have the opportunity to showcase your talents by contributing to impactful projects. Responsibilities Developing Modern Data Warehouse solutions using Snowflake and ADF. Ability to provide solutions that are forward-thinking in the data engineering and analytics space. Good understanding of star and snowflake dimensional modeling. Good knowledge of Snowflake security, Snowflake SQL, and designing other Snowflake objects Hands-on experience with Snowflake utilities such as SnowSQL, SnowPipe, Taks, Streams, Time travel, Cloning, Optimizer, data sharing, stored procedures, and UDFs. Good understanding of Databricks Data and Databricks Delta Lake Architecture. Experience in Azure Data Factory (ADF) to design, implement and manage complex data integration and transformation workflow. Good understanding of SDLC and Agile Methodologies. Strong problem-solving skills and analytical skills with proven strength in applying root-cause analysis. Ability to communicate verbally and in technical writing to all levels of the organization. Strong teamwork and interpersonal skills at all levels. Dedicated to excellence in one’s work; strong organizational skills; detail-oriented and thorough. Hands-on experience on support activities, able to create and resolve tickets – Jira, ServiceNow, Azure DevOps. Requirements Strong experience in Snowflake and ADF. 
Requirements

- Strong experience in Snowflake and ADF.
- Experience of working in an onsite/offshore model.
- 5+ years of experience in Snowflake and ADF development.

About The Company

Headquartered in New Jersey, US, ValueMomentum is the largest standalone provider of IT services and solutions to insurers. Our industry focus, expertise in technology backed by R&D, and our customer-first approach uniquely position us to deliver the value we promise and drive momentum to our customers' initiatives. ValueMomentum is amongst the top 10 insurance-focused IT services firms in North America by number of customers. Leading insurance firms trust ValueMomentum with their Digital, Data, Core, and IT Transformation initiatives.

Benefits

We at ValueMomentum offer you a congenial environment to work and grow in the company of experienced professionals. Some benefits that are available to you are:

- Competitive compensation package.
- Career advancement: individual career development, coaching, and mentoring programs for professional and leadership skill development.
- Comprehensive training and certification programs.
- Performance management: goal setting, continuous feedback, and year-end appraisal.
- Reward and recognition for extraordinary performers.

Posted 3 weeks ago

Apply

7.0 - 10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business.

Experience: 7 to 10 years

Mandatory Skills:

- Data model design, ER diagrams, data warehousing, and data strategy (illustrated in the sketch below).
- Hands-on experience in design and architecture for enterprise data applications.

Good to Have:

- Python, PySpark, Databricks, Azure services (ADLS, ADF, ADB).
- Good communication and problem-solving skills.
- Some understanding of the CPG domain.

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!

Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page, or create an account to set up email alerts as new job postings become available that meet your interest!
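Since this role pairs data-model design with the PySpark/Databricks stack, here is a minimal sketch of how a star-schema model is typically consumed there. All table and column names (dw.fact_sales, dim_product, net_amount, and so on) are illustrative assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

# Hypothetical star schema: fact_sales references dim_product and dim_date
# through surrogate keys.
fact_sales = spark.table("dw.fact_sales")
dim_product = spark.table("dw.dim_product")
dim_date = spark.table("dw.dim_date")

# A typical warehouse query: join the fact to its dimensions, then aggregate.
revenue_by_category = (
    fact_sales
    .join(dim_product, "product_key")
    .join(dim_date, "date_key")
    .where(F.col("year") == 2024)
    .groupBy("category")
    .agg(F.sum("net_amount").alias("revenue"))
)
revenue_by_category.show()
```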

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


K&K Talents is an international recruiting agency that has been providing technical resources globally since 1993. This position is with one of our clients in India, who is actively hiring candidates to expand their teams.

Title: Data Engineer (SQL & ADF)
Location: Gurgaon, India - Hybrid
Employment Type: Full-time Permanent
Notice Period: Immediate

Role:

We are seeking a skilled and proactive SQL + ADF Developer to join our client's data engineering team. The ideal candidate will have strong hands-on experience in SQL development, Azure Data Factory (ADF), and working knowledge of AWS cloud services. You will be responsible for building and maintaining scalable data integration solutions that support our business intelligence and analytics needs.

Responsibilities:

- Develop, optimize, and maintain complex SQL queries, stored procedures, and scripts for large-scale data operations.
- Design and implement data pipelines using Azure Data Factory (ADF) for ETL/ELT processes (see the sketch after this posting).
- Integrate and move data between on-premise and cloud-based sources (Azure/AWS).
- Work with AWS services (e.g., S3, RDS, Glue, Lambda) for hybrid-cloud data workflows.
- Collaborate with data analysts, architects, and business teams to understand data requirements.
- Monitor, debug, and optimize ADF pipelines for performance and reliability.
- Document data flows, logic, and pipeline configurations for operational transparency.
- Participate in code reviews and follow data engineering best practices.

Required Skills:

- Experience in SQL development, including performance tuning and stored procedures.
- Hands-on experience with Azure Data Factory (ADF) and building data pipelines.
- Working experience with AWS cloud services for data storage or movement.
- Experience with relational databases such as SQL Server, PostgreSQL, or MySQL.
- Good understanding of data integration concepts, scheduling, and monitoring.
- Strong problem-solving and analytical skills.
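As a rough illustration of what building an ADF pipeline looks like when done programmatically, here is a sketch using the azure-mgmt-datafactory Python SDK, modeled on Microsoft's quickstart pattern. The resource identifiers, the two datasets (assumed to already exist in the factory), and the blob-to-SQL copy are all hypothetical assumptions, not requirements from the posting.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSource, CopyActivity, DatasetReference, PipelineResource, SqlSink,
)

# Hypothetical Azure resource identifiers.
SUB, RG, FACTORY = "<subscription-id>", "rg-data", "adf-demo"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUB)

# Copy from a blob dataset into a SQL dataset; both datasets are assumed
# to be defined in the factory already.
copy = CopyActivity(
    name="CopyOrdersToSql",
    inputs=[DatasetReference(type="DatasetReference", reference_name="OrdersBlob")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="OrdersSqlTable")],
    source=BlobSource(),
    sink=SqlSink(),
)

client.pipelines.create_or_update(
    RG, FACTORY, "LoadOrders", PipelineResource(activities=[copy])
)

# Trigger a run and capture its id for monitoring.
run = client.pipelines.create_run(RG, FACTORY, "LoadOrders", parameters={})
print("started run:", run.run_id)
```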

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote


Required Skills: Azure Delta Lake, ADF, Databricks, PySpark, Oozie, Airflow, Big Data technologies (HBase, Hive), CI/CD (GitHub/Jenkins) — see the sketch after this posting
Years of Experience: 8+
Mode of Work: Remote

Design, develop, modify, and test software applications for the healthcare industry in an agile environment. Duties include:

- Develop, support/maintain, and deploy software to support a variety of business needs.
- Provide technical leadership in the design, development, testing, deployment, and maintenance of software solutions.
- Design and implement platform and application security for applications.
- Perform advanced query analysis and performance troubleshooting.
- Coordinate with senior-level stakeholders to ensure the development of innovative software solutions to complex technical and creative issues.
- Re-design software applications to improve maintenance cost, testing functionality, platform independence, and performance.
- Manage user stories and project commitments in an agile framework to rapidly deliver value to customers.
- Deploy and operate software solutions using a DevOps model.
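For context on the Delta Lake plus PySpark combination in the skills list, a minimal sketch of a Databricks-style ingest into a Delta table follows. The storage paths, the claims dataset, and the column names are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("delta-ingest").getOrCreate()

# Hypothetical ADLS landing path; in practice this would be parameterized
# and triggered by an ADF pipeline.
raw = spark.read.json(
    "abfss://landing@myaccount.dfs.core.windows.net/claims/2024-06-01/"
)

cleaned = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Append into a Delta table so downstream readers get ACID guarantees
# and time travel over the ingest history.
(cleaned.write
    .format("delta")
    .mode("append")
    .save("abfss://curated@myaccount.dfs.core.windows.net/claims_delta"))
```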

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Greetings from TCS!

Job Title: Data Scientist / AI/ML Engineer
Required Skillset: AI/ML
Location: Hyderabad, Kolkata, Delhi, Chennai (last option)
Experience Range: 6-10 years

Job Description

Must-Have:

- AI/ML, Azure ML Studio, AI/ML on Databricks, Python, and CI/CD DevOps.
- Supervised and unsupervised ML and predictive analytics using Python (a small illustration follows this posting).
- Feature generation through data exploration and SME requirements.
- Relational database querying.
- Applying computational algorithms and statistical methods to structured and unstructured data.
- Communicating results through data visualizations.
- Programming languages: Python, PySpark.
- Big Data technologies: Spark with PySpark.
- Cloud technologies: Azure (ADF, Databricks, storage account usage, WebApp, Key Vault, SQL Server, Function App, Logic App, Synapse, Azure Machine Learning, Azure DevOps).
- RBAC maintenance for Azure roles.
- GitHub branching and management.
- Terraform scripting for Azure IaC.
- Optional: GCP (BigQuery, Dataproc, Cloud Storage).

Thanks & Regards,
Ria Aarthi A.
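To make the supervised-ML bullet concrete, here is a minimal classification sketch in scikit-learn. The feature file and the churned label column are invented for illustration; any real project in this stack would source features from Azure SQL, Synapse, or a Databricks table instead.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical feature table with a binary "churned" target column.
df = pd.read_parquet("features.parquet")

X = df.drop(columns=["churned"])
y = df["churned"]

# Stratified split keeps the class balance consistent across train and test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```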

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Kochi, Kerala, India

On-site


Role Description

Job Summary:

We are seeking an experienced ADF Developer to design, build, and maintain data integration solutions using Azure Data Factory, with exposure to Azure Databricks (ADB). The ideal candidate will have hands-on expertise in ETL pipelines, data engineering, and Azure cloud services to support enterprise data initiatives.

Key Responsibilities

- Design and develop scalable ETL pipelines using ADF.
- Integrate ADB for advanced data transformation tasks.
- Optimize and troubleshoot ADF pipelines and queries (SQL, Python, Scala).
- Implement robust data validation, error handling, and performance tuning (see the sketch after this posting).
- Collaborate with data architects, analysts, and DevOps teams.
- Maintain technical documentation and support ongoing solution improvements.

Required Qualifications

- Bachelor's/Master's in Computer Science or a related field.
- 2+ years of hands-on ADF experience.
- Strong skills in Python, SQL, and/or Scala.
- Familiarity with ADB and Azure cloud services.
- Solid knowledge of ETL, data warehousing, and performance optimization.

Preferred

- Microsoft Azure Data Engineer certification.
- Exposure to Spark, Hadoop, Git, Agile practices, and domain-specific projects (finance, healthcare, retail).
- Understanding of data governance and compliance.

Skills: ADF, ADB, DataStage
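On the data-validation and error-handling responsibility, here is a minimal PySpark sketch of the kind of checks an ADB notebook task might run before a load. The staging path, column names, and check rules are hypothetical assumptions; a production project might use a dedicated framework such as Great Expectations instead.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("validate-load").getOrCreate()

# Hypothetical staging location populated by an upstream ADF copy activity.
df = spark.read.parquet("/mnt/staging/patients")

# Simple declarative checks evaluated against the staged data.
checks = {
    "non_empty": df.count() > 0,
    "no_null_ids": df.filter(F.col("patient_id").isNull()).count() == 0,
    "no_future_dates": df.filter(F.col("admit_date") > F.current_date()).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # Raising makes the notebook task fail, which ADF surfaces as a failed
    # activity that can trigger alerts or retry policies.
    raise ValueError(f"Data validation failed: {failed}")
```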

Posted 3 weeks ago

Apply

Exploring ADF Jobs in India

The job market for ADF (Application Development Framework) professionals in India is witnessing significant growth, with numerous opportunities available for job seekers in this field. ADF is a popular framework used for building enterprise applications, and companies across various industries are actively looking for skilled professionals to join their teams.

Top Hiring Locations in India

Here are 5 major cities in India where there is a high demand for ADF professionals:

- Bangalore
- Hyderabad
- Pune
- Chennai
- Mumbai

Average Salary Range

The estimated salary range for ADF professionals in India varies based on experience levels:

- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum

Career Path

In the ADF job market in India, a typical career path may include roles such as Junior Developer, Senior Developer, Technical Lead, and Architect. As professionals gain more experience and expertise in ADF, they can progress to higher-level positions with greater responsibilities.

Related Skills

In addition to ADF expertise, professionals in this field are often expected to have knowledge of related technologies such as Java, Oracle Database, SQL, JavaScript, and web development frameworks like Angular or React.

Interview Questions

Here are sample interview questions for ADF roles, categorized by difficulty level:

Basic:
- What is ADF and what are its key features?
- What is the difference between ADF Faces and ADF Task Flows?

Medium:
- Explain the lifecycle of an ADF application.
- How do you handle exceptions in ADF applications?

Advanced:
- Discuss the advantages of using ADF Business Components.
- How would you optimize performance in an ADF application?

Closing Remark

As you explore job opportunities in the ADF market in India, make sure to enhance your skills, prepare thoroughly for interviews, and showcase your expertise confidently. With the right preparation and mindset, you can excel in your ADF career and secure rewarding opportunities in the industry. Good luck!
