Jobs
Interviews

1656 Adf Jobs - Page 26

Set up a job alert
JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Summary
Position Summary
Enterprise Performance
The Enterprise Performance Portfolio is a collection of Offerings that helps clients achieve the maximum possible impact and value from their investments in Finance, Supply Chain and IT operations. By taking a holistic view of these key business functions from strategy articulation through process design and technology enablement, we can help our clients navigate their challenges while operating components of their business. As our clients drive towards their digital future, Finance, Supply Chain, and IT play an increasingly important role in how these organizations interact with their customers, suppliers, and other key stakeholders. By combining our strategy, operations improvement, implementation, and operate capabilities, we can be more creative in how we deploy our resources and drive innovation at market pace.

Oracle Offering
Our Oracle Enterprise Solutions practice provides services from ERP and Cloud Strategy, through Business Transformation and Applications Implementation, to Operate and Cloud Release Management. We modernize our clients’ business and core environments to leverage technology innovations around Cloud, Digital, Mobility and Social Collaboration. We help our clients address digital transformation by designing modern applications and industry-specific solutions to deliver outcomes that improve flexibility, scalability and cost management. Oracle ERP products include Oracle Retail, Oracle Cloud SaaS, EBS, PeopleSoft, and JD Edwards.

Job Location: Any Deloitte USI office location

Role/Job Description:
Excellent communication skills (written and verbal); able to independently engage with business stakeholders throughout the various phases of an Oracle RMS (MFCS)/ReIM (IMCS)/ReSA (SACS)/RPM (PCS)/Allocations/RIB (RICS)/XStore/RPAS/RDF/AIP implementation project
Design and develop Oracle Retail FRICEW components
Able to work with Oracle tools: SQL, PL/SQL, Pro*C, UNIX shell scripts
Design and develop batch programs using shell script and Pro*C, plus BI reports and ADF forms
Develop a good understanding of industry-leading practices and independently derive the solution
Identify and understand architectural pieces; provide direction and technical expertise in design, development and systems integration
Able to make quick decisions and solve technical problems to provide an efficient environment for project implementation
Able to provide functional training to teams when required and serve as a technical mentor to team members
Provide timely updates to the team leads/supervisor/project manager

Key Skills
3 to 6 years of functional experience in Oracle Retail implementation/upgrade/support projects
Should have worked on at least two of the following modules: RMS (MFCS), ReIM (IMCS), ReSA (SACS), RTM, RPM (PCS), Allocations, RIB (RICS), XStore, RPAS, RDF, AIP, MFP
Should have strong expertise in SQL, PL/SQL, Pro*C and shell scripting, along with strong analytical skills
Should have hands-on experience with FRICEW components
Experience in designing, developing, deploying, testing and maintaining technical FRICEW objects
Good communication skills (verbal and written), to be able to articulate problems and make design decisions
Should have a good understanding of Oracle implementation methodologies such as AIM
Knowledge of OIC, ADF, ODI, Informatica, OBIEE, SOA or Oracle Retail Cloud Infrastructure will be an added advantage

How you’ll grow
At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow.
We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning Center in the Hyderabad offices is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. Referral Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. 
Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 305997

Posted 1 month ago

Apply

12.0 - 18.0 years

25 - 30 Lacs

Hyderabad

Work from Office

Project Manager for Reputed US IT MNC, Hyderabad Please share your CV to jobs@kamms.net Title: Project Manager/ Scrum master Overall 10+ years of experience Strong knowledge of the business implications of an Agile transformation Good experience in Azure DevOps Project management and JIRA project management Demonstrate good interpersonal skills and communication, both written and spoken Deep understanding of agile metrics (tasks, backlog tracking, burndown metrics, velocity, user stories etc.) to analyze and improve sprint planning Hands-on expertise with the Atlassian product suite (Jira Align, JIRA, Confluence, and others) Strong executive presence, facilitation skills, drive for results, attention to quality and detail, and a collaborative attitude Strong familiarity with the principles of agile , business agility, and lean development methodologies (Agile, Scrum , Kanban , SAFe etc.) Advance the use of Agile and Scaled Agile practices Maintain plans, issues, risks program dashboards in Jira. Experience with Agile, Cloud Practices and Processes PMP certification Stay tightly integrated with assigned teams and actively remove impediments and blockers to software delivery teams. Identify risks/issues and implement mitigation and contingency plans; focusing on early mitigation or elimination of risks/issues. Provide written and verbal status reports for leadership levels

Posted 1 month ago

Apply

9.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Job Description This is a remote position. Job Description We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0. Responsibilities Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF). Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling. Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer). Develop and maintain bronze → silver → gold data layers using DBT or Coalesce. Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery. Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata. Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams). Work closely with QA teams to integrate test automation and ensure data quality. Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases. Document architectures, pipelines, and workflows for internal stakeholders. Requirements Essential Skills: Job Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions,Event Grid). Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python. Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts. Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and Dimensional Modeling. Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers. Expertise in monitoring and logging with CloudWatch, AWS Glue Metrics, MS Teams Alerts, and Azure Data Explorer (ADX). Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection. Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates. Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations. Personal Excellent communication and interpersonal skills, with the ability to engage with teams. Strong problem-solving, decision-making, and conflict-resolution abilities. Proven ability to work independently and lead cross-functional teams. Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism. Ability to maintain confidentiality and handle sensitive information with attention to detail with discretion. The candidate must have strong work ethics and trustworthiness Must be highly collaborative and team oriented with commitment to excellence. Preferred Skills Job Proficiency in SQL and at least one programming language (e.g., Python, Scala). Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services. Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica). Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs). Experience with data modeling, data structures, and database design. 
Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka). Personal Demonstrate proactive thinking Should have strong interpersonal relations, expert business acumen and mentoring skills Have the ability to work under stringent deadlines and demanding client conditions Ability to work under pressure to achieve the multiple daily deadlines for client deliverables with a mature approach Other Relevant Information Bachelor’s in Engineering with specialization in Computer Science or Artificial Intelligence or Information Technology or a related field. 9+ years of experience in data engineering and data architecture. LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
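For readers unfamiliar with the medallion (bronze → silver → gold) layering this posting references, here is a minimal, illustrative PySpark sketch of a bronze-to-silver promotion with schema enforcement and a simple quality gate. The storage paths, column names, and the 5% null threshold are assumptions invented for the example, not details from the posting; it also assumes a Delta-enabled Spark session (e.g. Databricks).

```python
# Illustrative bronze -> silver promotion with schema enforcement and basic checks.
# Paths, column names, and thresholds are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Explicit schema so malformed files fail fast instead of silently drifting.
order_schema = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("customer_id", StringType(), nullable=True),
    StructField("amount", DoubleType(), nullable=True),
    StructField("order_ts", TimestampType(), nullable=True),
])

bronze = (
    spark.read.schema(order_schema)
    .option("mode", "FAILFAST")          # reject files that do not match the schema
    .json("abfss://bronze@account.dfs.core.windows.net/orders/")
)

silver = (
    bronze.dropDuplicates(["order_id"])                        # deduplicate on the business key
    .filter(F.col("order_id").isNotNull())
    .withColumn("amount", F.coalesce(F.col("amount"), F.lit(0.0)))
)

# Lightweight data-quality gate before publishing to the silver layer.
total = silver.count()
null_customers = silver.filter(F.col("customer_id").isNull()).count()
if total == 0 or null_customers / total > 0.05:
    raise ValueError(f"Quality gate failed: {null_customers}/{total} rows missing customer_id")

silver.write.format("delta").mode("overwrite").save(
    "abfss://silver@account.dfs.core.windows.net/orders/"
)
```

In a real pipeline the quality gate would more likely be expressed with a dedicated framework such as Great Expectations or AWS Glue Data Quality, both of which the posting names.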

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

Hiring a Full Stack Developer for a 6-month remote contractual role. The ideal candidate will have 4-6 years of experience in full-stack development using React.js, Node.js or Python (Flask), and strong backend experience in SQL and ETL. Familiarity with the Azure Cloud Platform, including Azure Databricks, ADF, and other Azure services, is required. You will be responsible for building scalable web applications, developing RESTful APIs, integrating with cloud-based data workflows, and ensuring the performance and security of deployed solutions. Strong communication skills and hands-on expertise in version control tools like Git are expected. Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
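As a rough illustration of the Flask-based RESTful API work this role mentions, the sketch below exposes a small resource over HTTP against a local SQLite stand-in. The route names, table, and columns are hypothetical; a production service would point at the actual SQL/Azure backend and add authentication.

```python
# Minimal Flask REST endpoint sketch; DB path, table, and routes are illustrative only.
from flask import Flask, jsonify, request
import sqlite3

app = Flask(__name__)
DB_PATH = "app.db"  # stand-in for the real SQL backend; assumes an existing "orders" table


def get_conn():
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    return conn


@app.route("/api/orders", methods=["GET"])
def list_orders():
    # Simple limited read; a real service would query Azure SQL / Databricks SQL instead.
    limit = int(request.args.get("limit", 50))
    with get_conn() as conn:
        rows = conn.execute("SELECT id, status, amount FROM orders LIMIT ?", (limit,)).fetchall()
    return jsonify([dict(r) for r in rows])


@app.route("/api/orders", methods=["POST"])
def create_order():
    payload = request.get_json(force=True)
    with get_conn() as conn:
        cur = conn.execute(
            "INSERT INTO orders (status, amount) VALUES (?, ?)",
            (payload.get("status", "NEW"), payload.get("amount", 0)),
        )
        conn.commit()
    return jsonify({"id": cur.lastrowid}), 201


if __name__ == "__main__":
    app.run(debug=True)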

Posted 1 month ago

Apply

7.0 - 10.0 years

20 - 27 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

SSIS/SSRS Developer for a reputed US IT MNC, Hyderabad. If you are highly experienced, with 6 years of hands-on experience in SSIS and SSRS, please share your CV with jobs@kamms.net.
Position Title: SSIS/SSRS Developer
Position Type: Permanent
Job Location: Hyderabad, Chennai, Bangalore
Experience: 7+ Years
Mode: Office
Deep knowledge of RDBMS systems; SQL coding and optimizing programmable objects such as stored procedures and functions in SQL Server
Designing, implementing, and materializing views
Develop database schemas, tables and dictionaries using SQL Server 2012 and 2016
Experience creating enterprise data systems on Azure
Deep knowledge of Azure (Azure Data Factory, Azure Databricks, Azure Synapse, Azure SQL DW, Azure Analysis Services, Azure Logic Apps, Azure Storage Account, Azure Automation Account, Azure Machine Learning)
Writing automation packages for ETL processes, e.g. SSIS or Azure Data Factory
Should be very strong in SQL and able to understand and streamline complex SQL logic
Design, implement and maintain SSIS jobs; should be a good troubleshooter
Develop and prepare strategies for Business Intelligence processes
Manage and customize all ETL processes as per customer requirements and analyze those processes
Participate in business requirement conversations and translate them into a model design that meets customer needs
Use acquired knowledge to develop Logical Data Models, Physical Data Models, and Data Element Dictionaries; translate business requirements into data requirements
Understand the data elements and their domains; understand relationships among those elements and inter-dependencies between models
Extensive expertise in MS SQL Server and MySQL
Working knowledge of RDBMS, dimensional modeling, and data warehousing techniques
Must understand relational databases and be able to write and execute SQL statements
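To illustrate the kind of ETL automation described here (for example, an SSIS or ADF package invoking database logic), below is a hedged Python sketch that calls a SQL Server stored procedure via pyodbc. The connection string and the procedure name dbo.usp_LoadDailySales are placeholders, not artifacts from this employer.

```python
# Hypothetical automation wrapper around a SQL Server ETL stored procedure.
# Server, database, credentials, and procedure name are placeholders; requires
# the pyodbc package and a matching ODBC driver to be installed.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=SalesDW;"
    "UID=etl_user;PWD=***;Encrypt=yes;"
)


def run_daily_load(load_date: str) -> None:
    """Execute the load procedure and surface any status rows for monitoring."""
    with pyodbc.connect(CONN_STR, autocommit=True) as conn:
        cursor = conn.cursor()
        cursor.execute("EXEC dbo.usp_LoadDailySales @LoadDate = ?", load_date)
        # Load procedures often return a status result set; read it if present.
        if cursor.description:
            for row in cursor.fetchall():
                print("load status:", tuple(row))


if __name__ == "__main__":
    run_daily_load("2025-06-30")
```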

Posted 1 month ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Requisition Number: 100605 Data Engineer III
Shift: 2:00 PM - 11:00 PM IST
Location: Delhi NCR, Hyderabad, Bangalore, Pune, Chennai; this is a hybrid work opportunity.

Insight at a Glance
14,000+ engaged teammates globally with operations in 25 countries across the globe
Received 35+ industry and partner awards in the past year
$9.2 billion in revenue
#20 on Fortune’s World's Best Workplaces™ list
#14 on Forbes World's Best Employers in IT – 2023
#23 on Forbes Best Employers for Women in IT – 2023
$1.4M+ total charitable contributions in 2023 by Insight globally
Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organizations through complex digital decisions.

About The Role
We are seeking a Data Engineer III. As a Data Engineer III with Insight, you will focus on delivering the customer's vision through process, development, and lifecycle. We will count on you to work directly with stakeholders and team members to define, design, and deliver actionable insights while working with ETL developers and BI developers in an agile environment... Along the way, you will get to:
Create data pipelines through ADF, Azure Databricks, and Synapse Analytics
Fix and improve existing data ingestion pipelines for customers
Data engineering with Azure Data Factory, Azure SQL, Azure Data Lake, Azure Databricks, Microsoft Fabric & MS Power BI
Develop and optimize data pipelines using Microsoft Fabric's data integration tools
Champion data quality, integrity, and reliability throughout the organization by designing and promoting best practices
Be Ambitious: This opportunity is not just about what you do today but also about where you can go tomorrow. As a Data Engineer, you are positioned for swift advancement within our organization through a structured career path. When you bring your hunger, heart, and harmony to Insight, your potential will be met with continuous opportunities to upskill, earn promotions, and elevate your career.

What We’re Looking For
Data Engineer III with 6+ years of experience with ETL processes and data warehouse architecture
6+ years of experience with Azure Data services, i.e. ADF, ADLS Gen 2, Azure SQL DB, Synapse, Azure Databricks, Microsoft Fabric
5+ years of experience designing business intelligence solutions
Strong proficiency in SQL and Python/PySpark
Implementation experience of Medallion architecture and delta lake (or lakehouse)
Experience with cloud-based data platforms, preferably Azure
Familiarity with big data technologies and data warehousing concepts
Working knowledge of Azure DevOps and CI/CD (build and release)

What You Can Expect
We’re legendary for taking care of you and your family and helping you engage with your local community. We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include: Freedom to work from another location—even an international destination—for up to 30 consecutive calendar days per year.
Medical Insurance Health Benefits Professional Development: Learning Platform and Certificate Reimbursement Shift Allowance But what really sets us apart are our core values of Hunger, Heart, and Harmony, which guide everything we do, from building relationships with teammates, partners, and clients to making a positive impact in our communities. Join us today, your ambITious journey starts here. Insight is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation or any other characteristic protected by law. When you apply, please tell us the pronouns you use and any reasonable adjustments you may need during the interview process. At Insight, we celebrate diversity of skills and experience so even if you don’t feel like your skills are a perfect match - we still want to hear from you! Today's Talent Leads Tomorrow's Success. Learn More About Insight https://www.linkedin.com/company/insight/ Insight is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation or any other characteristic protected by law. Insight India Location:Level 16, Tower B, Building No 14, Dlf Cyber City In It/Ites Sez, Sector 24 &25 A Gurugram Gurgaon Hr 122002 India
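For context on the Medallion and delta-lake implementation experience this role asks for, here is a brief, assumed sketch of an incremental upsert into a silver Delta table using the Delta Lake MERGE API on Databricks. The table paths and the customer_id merge key are illustrative only.

```python
# Sketch of an incremental silver-layer upsert with Delta Lake MERGE, one common
# way to implement the medallion/lakehouse pattern. Paths and keys are assumptions.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("silver_upsert").getOrCreate()

updates = spark.read.format("delta").load("/mnt/bronze/customers_changes")
silver = DeltaTable.forPath(spark, "/mnt/silver/customers")

(
    silver.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # apply changed attributes for existing customers
    .whenNotMatchedInsertAll()   # insert customers seen for the first time
    .execute()
)
```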

Posted 1 month ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities
Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
Work with architects and lead engineers on solutions to meet functional and non-functional requirements.
Demonstrated knowledge of relevant industry trends and standards.
Demonstrate strong analytical and technical problem-solving skills.
Must have experience in the Data Engineering domain.

Qualifications we seek in you!
Minimum qualifications
Bachelor’s Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
Work with architects and lead engineers on solutions to meet functional and non-functional requirements.
Demonstrated knowledge of relevant industry trends and standards.
Demonstrate strong analytical and technical problem-solving skills.
Must have excellent coding skills in either Python or Scala, preferably Python.
Must have experience in the Data Engineering domain.
Must have implemented at least 4 projects end-to-end in Databricks.
Must have experience on Databricks, covering the various components listed below.
Must-have skills: Azure Data Factory, Azure Databricks, Python and PySpark
Expert with database technologies and ETL tools.
Hands-on experience designing and developing scripts for custom ETL processes and automation in Azure Data Factory, Azure Databricks, Python, PySpark, etc.
Good knowledge of the Azure, AWS and GCP cloud platform services stack.
Hands-on experience designing and developing scripts for custom ETL processes and automation in Azure Data Factory, Azure Databricks, Delta Lake, Databricks workflows orchestration, Python, PySpark, etc.
Good knowledge of Unity Catalog implementation.
Good knowledge of integration with other tools such as DBT and other transformation tools.
Good knowledge of Unity Catalog integration with Snowflake.
Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
Must have a good understanding of how to create complex data pipelines.
Must have good knowledge of data structures & algorithms.
Must be strong in SQL and Spark SQL.
Must have strong performance optimization skills to improve efficiency and reduce cost.
Must have worked on both batch and streaming data pipelines.
Must have extensive knowledge of the Spark and Hive data processing frameworks.
Must have worked on any cloud (Azure, AWS, GCP) and most common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
Must be strong in writing unit test cases and integration tests.
Must have strong communication skills and have worked on teams of size 5 plus.
Must have a great attitude towards learning new skills and upskilling existing skills.

Preferred Qualifications
Good to have Unity Catalog and basic governance knowledge.
Good to have Databricks SQL Endpoint understanding.
Good to have CI/CD experience to build the pipeline for Databricks jobs.
Good to have worked on a migration project to build a unified data platform.
Good to have knowledge of DBT.
Good to have knowledge of Docker and Kubernetes.

Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
Make an impact – Drive change for global enterprises and solve business challenges that matter
Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Senior Principal Consultant
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jun 30, 2025, 6:37:59 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
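As a hedged illustration of the batch-and-streaming pipeline experience listed above, the sketch below shows a minimal Spark Structured Streaming job that continuously lands JSON files into a Delta table with checkpointing. The paths, schema, and trigger interval are invented for the example and assume a Delta-enabled cluster.

```python
# Minimal Structured Streaming sketch; paths, schema, and trigger are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
])

events = (
    spark.readStream.schema(event_schema)
    .json("/mnt/landing/events/")            # new files appear here continuously
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events/")   # exactly-once bookkeeping
    .outputMode("append")
    .trigger(processingTime="1 minute")
    .start("/mnt/silver/events/")
)
query.awaitTermination()
```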

Posted 1 month ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities
Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
Work with architects and lead engineers on solutions to meet functional and non-functional requirements.
Demonstrated knowledge of relevant industry trends and standards.
Demonstrate strong analytical and technical problem-solving skills.
Must have experience in the Data Engineering domain.

Qualifications we seek in you!
Minimum qualifications
Bachelor’s Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
Work with architects and lead engineers on solutions to meet functional and non-functional requirements.
Demonstrated knowledge of relevant industry trends and standards.
Demonstrate strong analytical and technical problem-solving skills.
Must have excellent coding skills in either Python or Scala, preferably Python.
Must have experience in the Data Engineering domain.
Must have implemented at least 2 projects end-to-end in Databricks.
Must have experience on Databricks, covering the components below: Delta Lake, dbConnect, db API 2.0, Databricks workflows orchestration.
Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
Must have a good understanding of how to create complex data pipelines.
Must have good knowledge of data structures & algorithms.
Must be strong in SQL and Spark SQL.
Must have strong performance optimization skills to improve efficiency and reduce cost.
Must have worked on both batch and streaming data pipelines.
Must have extensive knowledge of the Spark and Hive data processing frameworks.
Must have worked on any cloud (Azure, AWS, GCP) and most common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
Must be strong in writing unit test cases and integration tests.
Must have strong communication skills and have worked on teams of size 5 plus.
Must have a great attitude towards learning new skills and upskilling existing skills.

Preferred Qualifications
Good to have Unity Catalog and basic governance knowledge.
Good to have Databricks SQL Endpoint understanding.
Good to have CI/CD experience to build the pipeline for Databricks jobs.
Good to have worked on a migration project to build a unified data platform.
Good to have knowledge of DBT.
Good to have knowledge of Docker and Kubernetes.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color , religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com . Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Lead Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jun 30, 2025, 7:33:07 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
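Since this posting calls out the Databricks REST API ("db API 2.0"), here is a small, assumed Python sketch that triggers an existing Databricks job over that API. The workspace URL, token variable, and job_id are placeholders; verify the exact endpoint and payload against your workspace's API version before relying on it.

```python
# Rough sketch of triggering a Databricks job through the REST API 2.0.
# Workspace URL, token source, and job_id are hypothetical placeholders.
import os
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # hypothetical workspace
TOKEN = os.environ["DATABRICKS_TOKEN"]


def run_job(job_id: int, params: dict | None = None) -> int:
    """Kick off an existing Databricks job and return the run_id."""
    resp = requests.post(
        f"{HOST}/api/2.0/jobs/run-now",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"job_id": job_id, "notebook_params": params or {}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]


if __name__ == "__main__":
    print("started run:", run_job(job_id=42, params={"load_date": "2025-06-30"}))
```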

Posted 1 month ago

Apply

0 years

4 - 8 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory , our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI , our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation , our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn , X , YouTube , and Facebook . Inviting applications for the role of Senior Associate – Data Engineer! We are seeking a highly analytical, detail-oriented and hands-on Data Engineer to join our team. The ideal candidate will have a strong background in data engineering, with expertise in Pyspark, Spark SQL Databricks. Key Responsibilities: 1. Strong hands-on experience with Pyspark , Spark SQL Databricks 2. Extensive knowledge on big data concepts on delta tables, DLT, cluster management, performance management in ADB. 3. Should be able to write complex SQL queries. 4. Strong hands-on experience Azure Data Factory. 5. Knowledge on DevOps and Agile methodologies-based projects, implement the requirements using ADF/Data Bricks 6. Knowledge of version control tools such as ADO. 7. Good Communication skills Qualifications we seek in you! Minimum Qualifications B.E./B Tech/BCA/MCA Hands-on experience with insurance domain and production support Experience of working with tools like ServiceNow, JIRA Excellent communication and interpersonal skills. Ability to work collaboratively in a team environment and interact with stakeholders at different levels . Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color , religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. 
Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Senior Associate Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jun 30, 2025, 6:30:49 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
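To give a concrete flavor of the "complex SQL queries" this role expects, below is an illustrative Spark SQL window-function query over an assumed Delta table in an insurance-style schema. The database, table, and column names are hypothetical.

```python
# Illustrative "complex SQL": rank policies per customer with a window function.
# Table and column names are assumptions; the table is presumed registered in the metastore.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("policy_ranking").getOrCreate()

spark.sql("""
    SELECT customer_id,
           policy_id,
           premium,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY premium DESC) AS premium_rank
    FROM   insurance.policies          -- assumed Delta table
    WHERE  status = 'ACTIVE'
""").createOrReplaceTempView("ranked_policies")

# Keep only each customer's highest-premium active policy.
top_policies = spark.sql("SELECT * FROM ranked_policies WHERE premium_rank = 1")
top_policies.show(truncate=False)
```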

Posted 1 month ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Principal Consultant - Databricks Lead Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities
Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
Work with architects and lead engineers on solutions to meet functional and non-functional requirements.
Demonstrated knowledge of relevant industry trends and standards.
Demonstrate strong analytical and technical problem-solving skills.
Must have experience in the Data Engineering domain.

Qualifications we seek in you!
Minimum qualifications
Bachelor’s Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
Overall <<>>> years of experience in IT
Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
Work with architects and lead engineers on solutions to meet functional and non-functional requirements.
Demonstrated knowledge of relevant industry trends and standards.
Demonstrate strong analytical and technical problem-solving skills.
Must have excellent coding skills in either Python or Scala, preferably Python.
Must have experience in the Data Engineering domain.
Must have implemented at least 2 projects end-to-end in Databricks.
Must have experience on Databricks, covering the components below: Delta Lake, dbConnect, db API 2.0, Databricks workflows orchestration.
Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
Must have a good understanding of how to create complex data pipelines.
Must have good knowledge of data structures & algorithms.
Must be strong in SQL and Spark SQL.
Must have strong performance optimization skills to improve efficiency and reduce cost.
Must have worked on both batch and streaming data pipelines.
Must have extensive knowledge of the Spark and Hive data processing frameworks.
Must have worked on any cloud (Azure, AWS, GCP) and most common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
Must be strong in writing unit test cases and integration tests.
Must have strong communication skills and have worked on teams of size 15 plus.
Must have a great attitude towards learning new skills and upskilling existing skills.

Preferred Qualifications
Good to have Unity Catalog and basic governance knowledge.
Good to have Databricks SQL Endpoint understanding.
Good to have CI/CD experience to build the pipeline for Databricks jobs.
Good to have worked on a migration project to build a unified data platform.
Good to have knowledge of DBT.
Good to have knowledge of Docker and Kubernetes.

Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
Make an impact – Drive change for global enterprises and solve business challenges that matter
Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Principal Consultant
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jun 30, 2025, 6:54:44 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time

Posted 1 month ago

Apply

5.0 years

7 - 10 Lacs

Hyderābād

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are seeking a highly skilled and experienced Data Engineering to join our team. The ideal candidate will have a Solid background in programming, data management, and cloud infrastructure, with a focus on designing and implementing efficient data solutions. This role requires a minimum of 5+ years of experience and a deep understanding of Azure services and infrastructure and ETL/ELT solutions. Primary Responsibilities: Azure Infrastructure Management: Own and maintain all aspects of Azure infrastructure, recommending modifications to enhance reliability, availability, and scalability Security Management: Manage security aspects of Azure infrastructure, including network, firewall, private endpoints, encryption, PIM, and permissions management using Azure RBAC and Databricks roles Technical Troubleshooting: Diagnose and troubleshoot technical issues in a timely manner, identifying root causes and providing effective solutions Infrastructure as Code: Create and maintain Azure Infrastructure as Code using Terraform and GitHub Actions CI/CD Pipelines: Configure and maintain CI/CD pipelines using GitHub Actions for various Azure services such as ADF, Databricks, Storage, and Key Vault Programming Expertise: Utilize your expertise in programming languages such as Python to develop and maintain data engineering solutions Generative AI and Language Models: Knowledge of Language Models (LLMs) and Generative AI is a plus, enabling the integration of advanced AI capabilities into data workflows Real-Time Data Streaming: Use Kafka for real-time data streaming and integration, ensuring efficient data flow and processing Data Management: Proficiency in Snowflake for data wrangling and management, optimizing data structures for analysis DBT Utilization: Build and maintain data marts and views using DBT, ensuring data is structured for optimal analysis ETL/ELT Solutions: Design ETL/ELT solutions using tools like Azure Data Factory and Azure Databricks, leveraging methodologies to acquire data from various structured or semi-structured source systems Communication: Solid communication skills to explain technical issues and solutions clearly to the Engineering Lead and key stakeholders (as required) Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: 5+ years of experience in designing ETL/ELT solutions using tools like Azure Data Factory and Azure Databricks.OR Snowflake Expertise in programming languages such as Python Experience with Kafka for real-time data streaming and integration Experience in creating and configuring CI/CD pipelines using GitHub Actions for various Azure services Experience in creating and maintaining Azure Infrastructure as Code using Terraform and GitHub Actions Proficiency in Snowflake for data wrangling and management Proven ability to use DBT to build and maintain data marts and views Ability to configure, set up, and maintain GitHub for various code repositories In-depth understanding of managing security aspects of Azure infrastructure Solid problem-solving skills and ability to diagnose and troubleshoot technical issues Proven excellent communication skills for explaining technical issues and solutions At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
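As a hedged example of the Kafka-based real-time streaming integration this role mentions, the snippet below publishes a small JSON event with the kafka-python client. The broker address, topic name, and event fields are assumptions for illustration only.

```python
# Minimal Kafka producer sketch; broker, topic, and payload shape are illustrative.
# Requires the kafka-python package and a reachable Kafka broker.
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                      # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)


def publish_claim_event(claim_id: str, status: str) -> None:
    # Each event lands on the topic and can be picked up by downstream loaders
    # (e.g. Databricks or Snowflake ingestion) for near-real-time processing.
    producer.send("claim-status-events", {"claim_id": claim_id, "status": status, "ts": time.time()})


if __name__ == "__main__":
    publish_claim_event("CLM-1001", "ADJUDICATED")
    producer.flush()   # block until the broker acknowledges the message
```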

Posted 1 month ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Responsibilities Design, construct, and maintain scalable data management systems using Azure Databricks, ensuring they meet end-user expectations. Supervise the upkeep of existing data infrastructure workflows to ensure continuous service delivery. Create data processing pipelines utilizing Databricks Notebooks, Spark SQL, Python and other Databricks tools. Oversee and lead the module through planning, estimation, implementation, monitoring and tracking. Desired Skills And Experience Over 8 + years of experience in data engineering, with expertise in Azure Data Bricks, MSSQL, LakeFlow, Python and supporting Azure technology. Design, build, test, and maintain highly scalable data management systems using Azure Databricks. Create data processing pipelines utilizing Databricks Notebooks, Spark SQL. Integrate Azure Databricks with other Azure services like Azure Data Lake Storage, Azure SQL Data Warehouse. Design and implement robust ETL pipelines using ADF and Databricks, ensuring data quality and integrity. Collaborate with data architects to implement effective data models and schemas within the Databricks environment. Develop and optimize PySpark/Python code for data processing tasks. Assist stakeholders with data-related technical issues and support their data infrastructure needs. Develop and maintain documentation for data pipeline architecture, development processes, and data governance. Data Warehousing: In-depth knowledge of data warehousing concepts, architecture, and implementation, including experience with various data warehouse platforms. Extremely strong organizational and analytical skills with strong attention to detail Strong track record of excellent results delivered to internal and external clients. Excellent problem-solving skills, with ability to work independently or as part of team. Strong communication and interpersonal skills, with ability to effectively engage with both technical and non-technical stakeholders. Able to work independently without the needs for close supervision and collaboratively as part of cross-team efforts. Key Responsibilities Interpret business requirements, either gathered or acquired. Work with internal resources as well as application vendors Designing, developing, and maintaining Data Bricks Solution and Relevant Data Quality rules Troubleshooting and resolving data related issues. Configuring and Creating Data models and Data Quality Rules to meet the needs of the customers. Hands on in handling Multiple Database platforms. Like Microsoft SQL Server, Oracle etc Reviewing and analyzing data from multiple internal and external sources Analyse existing PySpark/Python code and identify areas for optimization. Write new optimized SQL queries or Python Scripts to improve performance and reduce run time. Identify opportunities for efficiencies and innovative approaches to completing scope of work. Write clean, efficient, and well-documented code that adheres to best practices and Council IT coding standards. Maintenance and operation of existing custom codes processes Participate in team problem solving efforts and offer ideas to solve client issues. Query writing skills with ability to understand and implement changes to SQL functions and stored procedures. Effectively communicate with business and technology partners, peers and stakeholders Ability to deliver results under demanding timelines to real-world business problems. Ability to work independently and multi-task effectively. 
Configure system settings and options and execute unit/integration testing. Develop end-user Release Notes, training materials and deliver training to a broad user base. Identify and communicate areas for improvement Demonstrate high attention to detail, should work in a dynamic environment whilst maintaining high quality standards, a natural aptitude to develop good internal working relationships and a flexible work ethic. Responsible for Quality Checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
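One common PySpark optimization of the kind this posting asks for is replacing a shuffle-heavy join with a broadcast join against a small dimension table; a minimal, assumed sketch follows. The paths and column names are invented for the example.

```python
# Sketch of a broadcast-join optimization; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join_optimization").getOrCreate()

facts = spark.read.format("delta").load("/mnt/silver/transactions")   # large fact table
dims = spark.read.format("delta").load("/mnt/silver/branch_dim")      # small lookup table

# broadcast() ships the small table to every executor, so the large table
# is never shuffled across the cluster for this join.
enriched = facts.join(broadcast(dims), on="branch_id", how="left")

daily_totals = (
    enriched.groupBy("branch_region", F.to_date("txn_ts").alias("txn_date"))
    .agg(F.sum("amount").alias("total_amount"))
)
daily_totals.write.format("delta").mode("overwrite").save("/mnt/gold/daily_branch_totals")
```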

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Key Responsibilities: A day in the life of an Infoscion. As part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation and design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from the business needs, define the to-be processes, and create detailed functional designs based on requirements. You will support configuring solution requirements on the products, understand any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you.
Technical Requirements: Primary skills: Technology - AWS DevOps; Technology - Cloud Integration - Azure Data Factory (ADF); Technology - Cloud Platform - AWS Database; Technology - Cloud Platform - Azure DevOps - Azure Pipelines; Technology - DevOps - Continuous Integration - Mainframe
Additional Responsibilities: Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing and structuring relevant data. Awareness of the latest technologies and trends. Logical thinking and problem-solving skills, along with an ability to collaborate. Ability to assess current processes, identify improvement areas and suggest technology solutions. Knowledge of one or two industry domains.
Preferred Skills: Technology->DevOps->Continuous integration - Mainframe, Technology->Cloud Platform->Azure Devops->Azure Pipelines, Technology->Cloud Platform->AWS Database, Technology->Cloud Integration->Azure Data Factory (ADF)

Posted 1 month ago

Apply

6.0 years

0 Lacs

Delhi

Remote

Full time | Work From Office
This position is currently open.
Department / Category: ENGINEER
Listed on Jun 30, 2025
Work Location: New Delhi, Bangalore, Hyderabad

Job Description of Databricks Engineer (6 to 8 Years Relevant Experience)
We are looking for an experienced Databricks Engineer to join our data engineering team and contribute to designing and implementing scalable data solutions on the Azure platform. This role involves working closely with cross-functional teams to build high-performance data pipelines and maintain a modern Lakehouse architecture.

Key Responsibilities:
Design and develop scalable data pipelines using Spark-SQL and PySpark in Azure Databricks.
Build and maintain Lakehouse architecture using Azure Data Lake Storage (ADLS) and Databricks.
Perform comprehensive data preparation tasks, including data cleaning and normalization, deduplication, and type conversions.
Collaborate with the DevOps team to deploy and manage solutions in production environments.
Partner with Data Science and Business Intelligence teams to share insights, align on best practices, and drive innovation.
Support change management through training, communication, and documentation during upgrades, data migrations, and system changes.

Required Qualifications:
5+ years of IT experience with strong exposure to cloud technologies, particularly Microsoft Azure.
Hands-on experience with Databricks, Azure Data Factory (ADF), and Azure Data Lake Storage (ADLS); programming with PySpark, Python, and SQL.
Solid understanding of data engineering concepts, data modeling, and data processing frameworks.
Ability to work effectively in distributed, remote teams.
Excellent communication skills in English (both written and verbal).

Preferred Skills:
Strong working knowledge of distributed computing frameworks, especially Apache Spark and Databricks.
Experience with Delta Lake and Lakehouse architecture principles.
Familiarity with data tools and libraries such as Pandas, Spark-SQL, and PySpark.
Exposure to on-premise databases such as SQL Server, Oracle, etc.
Experience with version control tools (e.g., Git) and DevOps practices including CI/CD pipelines.

Required Skills for the Databricks Engineer Job: Spark-SQL and PySpark in Azure Databricks, Python, SQL

Our Hiring Process: Screening (HR Round), Technical Round 1, Technical Round 2, Final HR Round
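For candidates mapping the data-preparation responsibilities above to code, here is a short, illustrative PySpark snippet covering cleaning/normalization, type conversion, and deduplication. The ADLS paths and column names are placeholders, not details from the employer.

```python
# Illustrative data-preparation step: normalization, type conversion, deduplication.
# Source/target paths and columns are hypothetical; assumes a Delta-enabled cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("data_prep").getOrCreate()

raw = spark.read.option("header", True).csv("abfss://raw@lake.dfs.core.windows.net/customers/")

clean = (
    raw.withColumn("email", F.lower(F.trim("email")))                    # normalize casing/whitespace
    .withColumn("signup_date", F.to_date("signup_date", "yyyy-MM-dd"))   # string -> date conversion
    .withColumn("lifetime_value", F.col("lifetime_value").cast("double"))
    .na.fill({"country": "UNKNOWN"})                                     # fill obvious gaps
    .dropDuplicates(["customer_id"])                                     # deduplicate on the key
)

clean.write.format("delta").mode("overwrite").save(
    "abfss://curated@lake.dfs.core.windows.net/customers/"
)
```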

Posted 1 month ago

Apply

2.0 - 6.0 years

10 - 13 Lacs

Noida

On-site

Requirements Gathering & Data Analysis (~15%)
Uncover Customer Needs: Actively gather customer requirements and analyze user needs to ensure software development aligns with real-world problems.
Transform Needs into Action: Translate these requirements into clear and actionable software development tasks.
Deep Collaboration: Collaborate daily with stakeholders across the project, including internal and external teams, to gain a comprehensive understanding of business objectives.

Building the Foundation: System Architecture (~10%)
Prototype & Analyze: Develop iterative prototypes while analyzing upstream data sources to ensure the solution aligns with business needs.
Evaluate & Validate: Assess design alternatives and technical feasibility, and build proofs of concept to gather early user feedback and choose the most effective approach.
Design for Scale: Craft a robust, scalable, and efficient database schema, documenting all architectural dependencies for future reference.
Optimize Implementation: Translate functional specifications.

Write Clean, Well-Documented, and Efficient Code (~55%)
Technologies: Microsoft Fabric, Azure Synapse, Azure Data Explorer, and other Azure services, along with Power BI, Machine Learning, Power Apps, Dynamics 365, HTML5, and React.
Azure Data Platform Specialist: Develop, maintain, and enhance data pipelines using Azure Data Factory (ADF) to streamline data flow. Analyze data models in Azure Analysis Services for deeper insights. Leverage the processing power of Azure Databricks for complex data transformations.
Data Visualization Wizard: Craft compelling reports, dashboards, and analytical models using BI tools such as Power BI to transform raw data into actionable insights.
AI & Machine Learning Powerhouse: Build and maintain machine learning models in Python to uncover hidden insights in data, predict future trends, and integrate with Large Language Models (LLMs) to unlock new possibilities.
Full-Stack Rockstar: Build interactive user interfaces (UIs) with modern front-end frameworks such as React, and write powerful back-end code based on system specifications.
Level Up with AI: Write code faster and smarter with AI-powered copilots that suggest code completions and help you learn the latest technologies.
Quality Champion: Implement unit testing to ensure code quality and functionality. Use current frameworks and libraries to develop and maintain efficient, reliable web applications.
Data-Driven Decisions: Analyze reports generated from various tools to identify trends and incorporate those findings into ongoing development for continuous improvement.
Collaborative Code Craftsmanship: Foster a culture of code excellence through peer and external code reviews facilitated by Git and Azure DevOps.
Automation Advocate: Automate daily builds for efficient verification and customer feedback, ensuring a smooth development process.
Seamless User Experience: Bridge the gap between defined requirements, business logic implemented in the database, and user experience so users can easily interact with the data.
Proactive Problem Solver: Proactively debug, monitor, and troubleshoot solutions to maintain optimal performance and a positive user experience.

Quality Control and Assurance (10%)
Code Excellence: Ensure code quality aligns with industry standards, best practices, and automated quality tools for maintainable and efficient development.
Proactive Debugging: Continuously monitor, debug, and troubleshoot solutions to maintain optimal performance and reliability.
End-to-End & Automated Testing: Implement automated testing frameworks to streamline testing, improve coverage, and increase efficiency; conduct comprehensive manual and automated tests across all stages of development to validate functionality, security, and user experience.
AI-Powered Testing: Leverage AI-driven testing tools for intelligent test case generation.
Collaborative Code Reviews: Conduct peer and external code reviews to enhance code quality and maintainability.
Seamless Deployment: Oversee the deployment process, ensuring successful implementation and validation of live solutions.

Continuous Learning & Skill Development (10%)
Community & Training: Sharpen your skills by actively participating in technical learning communities and internal training programs.
Industry Certifications: Earn industry-recognized certifications in in-demand areas such as data analysis, Azure development, data engineering, AI engineering, and data science (as applicable).
Online Learning Platforms: Expand your skill set through online courses on platforms like Microsoft Learn, Coursera, edX, Udemy, and Pluralsight.

Candidate Profile
Eligible branches: B.Tech./B.E. (CSE/IT), M.Tech./M.E. (CSE/IT)
Eligibility criteria: 60% plus or equivalent in Computer Science/Information Technology; 2 to 6 years of software development experience
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,300,000.00 per year
Application Question(s): Notice Period
Work Location: In person
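For the Azure Data Factory responsibilities listed above, pipeline runs are typically triggered and monitored programmatically. The following is only a rough sketch using the azure-identity and azure-mgmt-datafactory Python packages; the subscription, resource group, factory, pipeline name, and parameter are placeholders, not values from the posting.

```python
# Minimal sketch: trigger an Azure Data Factory pipeline run and poll its status.
# All resource names below are placeholders for illustration only.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-analytics"         # placeholder
FACTORY_NAME = "adf-analytics-dev"      # placeholder
PIPELINE_NAME = "pl_ingest_sales"       # placeholder

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off the pipeline, optionally passing runtime parameters.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2024-01-01"},  # hypothetical pipeline parameter
)

# Poll until the run reaches a terminal state.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline run {run.run_id} finished with status: {status}")
```

In practice a scheduled trigger inside ADF would usually own the execution; a script like this is more common in CI/CD or ad-hoc re-run scenarios.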

Posted 1 month ago

Apply

15.0 - 20.0 years

20 - 25 Lacs

Indore, Hyderabad, Ahmedabad

Work from Office

Locations: Offices in Austin (USA), Singapore, Hyderabad, Indore, Ahmedabad (India)
Primary Job Location: Hyderabad / Indore / Ahmedabad (India)
Role Type: Full-time | Onsite

What You Will Do
Role Overview: As a Data Governance Architect, you will define and lead enterprise-wide data governance strategies, design robust governance architectures, and enable seamless implementation of tools like Microsoft Purview, Informatica, and other leading data governance platforms. This is a key role bridging compliance, data quality, security, and metadata management across cloud and enterprise ecosystems.

Key Responsibilities
1. Strategy, Framework, and Operating Model: Define governance strategies, standards, and policies for compliance and analytics readiness. Establish a governance operating model with clear roles and responsibilities. Conduct maturity assessments and lead change management efforts.
5. Metadata, Lineage & Glossary Management: Architect technical and business metadata workflows. Validate end-to-end lineage across ADF, Synapse, and Power BI. Govern glossary approvals and term workflows.
6. Policy & Data Classification Management: Define and enforce rules for classification, access, retention, and sharing. Leverage Microsoft Information Protection (MIP) for automation. Ensure alignment with GDPR, HIPAA, CCPA, and SOX.
7. Data Quality Governance: Define quality KPIs, validation logic, and remediation rules. Build scalable frameworks embedded in pipelines and platforms.
8. Compliance, Risk & Audit Oversight: Establish compliance standards, dashboards, and alerts. Enable audit readiness and reporting through governance analytics.
9. Automation & Integration: Automate workflows using PowerShell, Azure Functions, Logic Apps, and REST APIs. Integrate governance into Azure Monitor, Synapse Link, Power BI, and third-party tools.

Primary Skills
Microsoft Purview Architecture & Administration
Data Governance Framework Design
Metadata & Data Lineage Management (ADF, Synapse, Power BI)
Data Quality and Compliance Governance
Informatica / Collibra / BigID / Alation / Atlan
PowerShell, REST APIs, Azure Functions, Logic Apps
RBAC, Glossary Governance, Classification Policies
MIP, Insider Risk, DLP, Compliance Reporting
Azure Data Factory, Agile Methodologies

Tags: #DataGovernance #MicrosoftPurview #GovernanceArchitect #MetadataManagement #DataLineage #DataQuality #Compliance #RBAC #PowerShell #RESTAPI #Informatica #Collibra #BigID #AzureFunctions #ADF #Synapse #PowerBI #GDPR #HIPAA #CCPA #SOX #OnsiteJobs #HyderabadJobs #IndoreJobs #AhmedabadJobs #HiringNow #DataPrivacy #EnterpriseArchitecture #DSPM #GovernanceStrategy #InformationSecurity
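To make the classification-policy side of this role concrete, here is a small, self-contained sketch of how candidate sensitive columns might be flagged for review before labels are applied in a catalog such as Microsoft Purview. The patterns and the sample schema are illustrative assumptions, not an organisational policy or a Purview API call.

```python
# Illustrative sketch only: flag columns whose names suggest sensitive data so they
# can be reviewed and labelled. Rules and schema below are invented for the example.
import re

CLASSIFICATION_RULES = {
    "PII": [r"ssn", r"aadhaar", r"passport", r"date_of_birth", r"dob", r"email", r"phone"],
    "PCI": [r"card_number", r"cvv", r"pan"],
    "PHI": [r"diagnosis", r"medical_record", r"mrn"],
}

def classify_columns(columns):
    """Return a mapping of column name -> suggested sensitivity labels."""
    suggestions = {}
    for col in columns:
        labels = [
            label
            for label, patterns in CLASSIFICATION_RULES.items()
            if any(re.search(p, col, re.IGNORECASE) for p in patterns)
        ]
        if labels:
            suggestions[col] = labels
    return suggestions

# Hypothetical schema from a customer table.
schema = ["customer_id", "email_address", "card_number", "signup_date", "diagnosis_code"]
print(classify_columns(schema))
# {'email_address': ['PII'], 'card_number': ['PCI'], 'diagnosis_code': ['PHI']}
```

A real deployment would feed such suggestions into the governance workflow for human approval rather than applying labels automatically.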

Posted 1 month ago

Apply

7.0 - 11.0 years

12 - 18 Lacs

Mumbai, Indore, Hyderabad

Work from Office

Key Responsibilities
1. Governance Strategy & Stakeholder Enablement: Define and drive enterprise-level data governance frameworks and policies. Align governance objectives with compliance, analytics, and business priorities. Work with IT, Legal, Compliance, and Business teams to drive adoption. Conduct training, workshops, and change management programs.
2. Microsoft Purview Implementation & Administration: Administer Microsoft Purview accounts, collections, RBAC, and scanning policies. Design scalable governance architecture for large-scale data environments (>50 TB). Integrate with Azure Data Lake, Synapse, SQL DB, Power BI, and Snowflake.
3. Metadata & Data Lineage Management: Design metadata repositories and workflows. Ingest technical/business metadata via ADF, REST APIs, PowerShell, and Logic Apps. Validate end-to-end lineage (ADF, Synapse, Power BI), impact analysis, and remediation.
4. Data Classification & Security: Implement and govern sensitivity labels (PII, PCI, PHI) and classification policies. Integrate with Microsoft Information Protection (MIP), DLP, Insider Risk, and Compliance Manager. Enforce lifecycle policies, records management, and information barriers.
Working knowledge of GDPR, HIPAA, SOX, and CCPA. Strong communication and leadership to bridge technical and business governance.
Location: Mumbai, Hyderabad, Indore, Ahmedabad
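As a toy illustration of the end-to-end lineage validation mentioned above: the asset names and the expected ADF-to-Synapse-to-Power BI path below are invented, and a real implementation would read lineage edges from Microsoft Purview rather than a hard-coded dictionary.

```python
# Toy sketch: check that every Power BI dataset can be traced back through Synapse
# to an ADF pipeline. The edges here are a hard-coded stand-in for catalog lineage.
LINEAGE_EDGES = {
    # downstream asset -> upstream asset it was loaded from
    "powerbi:sales_dashboard": "synapse:fact_sales",
    "synapse:fact_sales": "adf:pl_ingest_sales",
    "powerbi:orphan_report": "synapse:stg_orders",   # deliberately missing an ADF source
}

def trace_to_source(asset, edges):
    """Walk upstream until no parent is found; return the full path."""
    path = [asset]
    while path[-1] in edges:
        path.append(edges[path[-1]])
    return path

for dataset in [a for a in LINEAGE_EDGES if a.startswith("powerbi:")]:
    path = trace_to_source(dataset, LINEAGE_EDGES)
    status = "OK" if path[-1].startswith("adf:") else "BROKEN LINEAGE"
    print(f"{status}: {' <- '.join(path)}")
```

The same pattern scales to impact analysis: walking the graph downstream instead of upstream shows which reports are affected when a pipeline changes.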

Posted 1 month ago

Apply

3.0 years

20 - 23 Lacs

Chennai, Tamil Nadu, India

On-site

We are hiring a detail-oriented and technically strong ETL Test Engineer to validate, verify, and maintain the quality of complex ETL pipelines and Data Warehouse systems. The ideal candidate will have a solid understanding of SQL, data validation techniques, regression testing, and Azure-based data platforms including Databricks.

Key Responsibilities
Perform comprehensive testing of ETL pipelines, ensuring data accuracy and completeness across systems.
Validate Data Warehouse (DWH) objects including fact and dimension tables.
Design and execute test cases and test plans for data extraction, transformation, and loading processes.
Conduct regression testing to validate enhancements and ensure no breakage of existing data flows.
Work with SQL to write complex queries for data verification and backend testing.
Test data processing workflows in Azure Data Factory and Databricks environments.
Collaborate with developers, data engineers, and business analysts to understand requirements and raise defects proactively.
Perform root cause analysis for data-related issues and suggest improvements.
Create clear and concise test documentation, logs, and reports.

Required Technical Skills
Strong knowledge of ETL testing methodologies and tools
Excellent skills in SQL (joins, aggregation, subqueries, performance tuning)
Hands-on experience with Data Warehousing and data models (Star/Snowflake)
Experience in test case creation, execution, defect logging, and closure
Proficient in regression testing, data validation, and data reconciliation
Working knowledge of Azure Data Factory (ADF), Azure Synapse, and Databricks
Experience with test management tools like JIRA, TestRail, or HP ALM

Nice to Have
Exposure to automation testing for data pipelines
Scripting knowledge in Python or PySpark
Understanding of CI/CD in data testing
Experience with data masking, data governance, and privacy rules

Qualifications
Bachelor's degree in Computer Science, Information Systems, or a related field
3+ years of hands-on experience in ETL/Data Warehouse testing
Excellent analytical and problem-solving skills
Strong attention to detail and communication skills

Skills: regression, azure, data reconciliation, test management tools, data validation, azure databricks, etl testing, data warehousing, dwh, etl pipeline, test case creation, azure data factory, test cases, etl tester, regression testing, sql, databricks
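A common building block in this kind of ETL testing is source-to-target reconciliation of row counts and aggregates. The sketch below is self-contained and uses SQLite as a stand-in for the real source system and warehouse; table names and the tolerance are assumptions.

```python
# Self-contained sketch of a source-vs-target reconciliation test. SQLite stands in
# for the real systems; in practice source and target would be separate connections
# (e.g. an operational database and Synapse/Databricks).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (order_id INTEGER, amount REAL);
    CREATE TABLE dwh_fact_orders (order_id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 100.0), (2, 250.5), (3, 75.25);
    INSERT INTO dwh_fact_orders VALUES (1, 100.0), (2, 250.5), (3, 75.25);
""")

def reconcile(table_src, table_tgt):
    """Compare row counts and summed amounts between source and target tables."""
    src_count, src_sum = conn.execute(
        f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table_src}").fetchone()
    tgt_count, tgt_sum = conn.execute(
        f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table_tgt}").fetchone()
    assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"
    assert abs(src_sum - tgt_sum) < 0.01, f"Amount mismatch: {src_sum} vs {tgt_sum}"
    return src_count, src_sum

count, total = reconcile("src_orders", "dwh_fact_orders")
print(f"Reconciliation passed: {count} rows, total amount {total}")
```

The same check is typically wrapped in a pytest case so it can run as part of regression suites after every pipeline change.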

Posted 1 month ago

Apply

6.0 - 11.0 years

16 - 27 Lacs

Noida, Pune, Bengaluru

Hybrid

Location: Mumbai / Pune / Bangalore / Noida / Kochi

Job Description
Key Responsibilities:
Collaborate with stakeholders to identify and gather reporting requirements, translating them into Power BI dashboards (in collaboration with Power BI developers).
Monitor, troubleshoot, and optimize data pipelines and Azure services for performance and reliability.
Follow best practices in DevOps to implement CI/CD pipelines.
Document pipeline architecture, infrastructure changes, and operational procedures.

Required Skills
Strong understanding of DevOps principles and CI/CD in Azure environments.
Proven hands-on experience with: Azure Data Factory, Azure Synapse Analytics, Azure Function Apps, Azure Infrastructure Services (Networking, Storage, RBAC, etc.), and PowerShell scripting.
Experience in designing data workflows for reporting and analytics, especially integrating with Azure DevOps (ADO).

Good to Have
Experience with Azure Service Fabric.
Hands-on exposure to Power BI or close collaboration with Power BI developers.
Familiarity with Azure Monitor, Log Analytics, and other observability tools.
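For the monitoring responsibility above, one pattern is a scheduled check that lists recent failed Data Factory runs. This is only a hedged sketch with the azure-mgmt-datafactory SDK; the subscription, resource group, and factory names are placeholders.

```python
# Rough sketch of a monitoring check: list Azure Data Factory pipeline runs that
# failed in the last 24 hours. Resource names are placeholders for illustration.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters, RunQueryFilter

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-reporting"         # placeholder
FACTORY_NAME = "adf-reporting-prod"     # placeholder

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

now = datetime.now(timezone.utc)
filters = RunFilterParameters(
    last_updated_after=now - timedelta(hours=24),
    last_updated_before=now,
    filters=[RunQueryFilter(operand="Status", operator="Equals", values=["Failed"])],
)

response = client.pipeline_runs.query_by_factory(RESOURCE_GROUP, FACTORY_NAME, filters)
for run in response.value:
    print(f"{run.pipeline_name} run {run.run_id} failed: {run.message}")
```

A script like this can be run from an Azure DevOps pipeline or an Azure Function on a timer, with alerts routed to the team channel; Azure Monitor alert rules are the lower-maintenance alternative.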

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Years of Experience: 5+
Mode of Work: Remote

Design, develop, modify, and test software applications for the healthcare industry in an agile environment. Duties include:
Develop, support/maintain, and deploy software to support a variety of business needs
Provide technical leadership in the design, development, testing, deployment, and maintenance of software solutions
Design and implement platform and application security for applications
Perform advanced query analysis and performance troubleshooting
Coordinate with senior-level stakeholders to ensure the development of innovative software solutions to complex technical and creative issues
Re-design software applications to improve maintenance cost, testing functionality, platform independence, and performance
Manage user stories and project commitments in an agile framework to rapidly deliver value to customers
Deploy and operate software solutions using a DevOps model

Required skills: Azure Delta Lake, ADF, Databricks, PySpark, Oozie, Airflow, Big Data technologies (HBase, Hive), CI/CD (GitHub/Jenkins)
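For the Delta Lake and PySpark portion of this stack, a minimal transformation might look like the following. The paths and column names are invented, and the sketch assumes a Databricks or local Spark environment with the Delta Lake libraries available.

```python
# Minimal PySpark sketch: read a Delta table, apply simple cleansing, and write the
# result back as Delta. Paths and column names are illustrative placeholders; on
# Databricks the SparkSession is already provided as `spark`.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_cleanse").getOrCreate()

raw = spark.read.format("delta").load("/mnt/bronze/claims")   # hypothetical path

cleansed = (
    raw.dropDuplicates(["claim_id"])                                      # deduplicate on business key
       .withColumn("claim_amount", F.col("claim_amount").cast("decimal(18,2)"))
       .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
       .filter(F.col("claim_amount").isNotNull())
)

cleansed.write.format("delta").mode("overwrite").save("/mnt/silver/claims")
```

In an orchestrated setup, a notebook or job wrapping this logic would typically be triggered from ADF or Airflow rather than run ad hoc.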

Posted 1 month ago

Apply

3.0 years

20 - 23 Lacs

Gurugram, Haryana, India

On-site

We are hiring a detail-oriented and technically strong ETL Test Engineer to validate, verify, and maintain the quality of complex ETL pipelines and Data Warehouse systems . The ideal candidate will have a solid understanding of SQL , data validation techniques , regression testing , and Azure-based data platforms including Databricks . Key Responsibilities Perform comprehensive testing of ETL pipelines, ensuring data accuracy and completeness across systems. Validate Data Warehouse (DWH) objects including fact and dimension tables. Design and execute test cases and test plans for data extraction, transformation, and loading processes. Conduct regression testing to validate enhancements and ensure no breakage of existing data flows. Work with SQL to write complex queries for data verification and backend testing. Test data processing workflows in Azure Data Factory and Databricks environments. Collaborate with developers, data engineers, and business analysts to understand requirements and raise defects proactively. Perform root cause analysis for data-related issues and suggest improvements. Create clear and concise test documentation, logs, and reports. Required Technical Skills Strong knowledge of ETL testing methodologies and tools Excellent skills in SQL (joins, aggregation, subqueries, performance tuning) Hands-on experience with Data Warehousing and data models (Star/Snowflake) Experience in test case creation, execution, defect logging, and closure Proficient in regression testing, data validation, data reconciliation Working knowledge of Azure Data Factory (ADF), Azure Synapse, and Databricks Experience with test management tools like JIRA, TestRail, or HP ALM Nice to Have Exposure to automation testing for data pipelines Scripting knowledge in Python or PySpark Understanding of CI/CD in data testing Experience with data masking, data governance, and privacy rules Qualifications Bachelor’s degree in Computer Science, Information Systems, or related field 3+ years of hands-on experience in ETL/Data Warehouse testing Excellent analytical and problem-solving skills Strong attention to detail and communication skills Skills: regression,azure,data reconciliation,test management tools,data validation,azure databricks,etl testing,data warehousing,dwh,etl pipeline,test case creation,azure data factory,test cases,etl tester,regression testing,sql,databricks

Posted 1 month ago

Apply

3.0 years

20 - 23 Lacs

Pune, Maharashtra, India

On-site

We are hiring a detail-oriented and technically strong ETL Test Engineer to validate, verify, and maintain the quality of complex ETL pipelines and Data Warehouse systems . The ideal candidate will have a solid understanding of SQL , data validation techniques , regression testing , and Azure-based data platforms including Databricks . Key Responsibilities Perform comprehensive testing of ETL pipelines, ensuring data accuracy and completeness across systems. Validate Data Warehouse (DWH) objects including fact and dimension tables. Design and execute test cases and test plans for data extraction, transformation, and loading processes. Conduct regression testing to validate enhancements and ensure no breakage of existing data flows. Work with SQL to write complex queries for data verification and backend testing. Test data processing workflows in Azure Data Factory and Databricks environments. Collaborate with developers, data engineers, and business analysts to understand requirements and raise defects proactively. Perform root cause analysis for data-related issues and suggest improvements. Create clear and concise test documentation, logs, and reports. Required Technical Skills Strong knowledge of ETL testing methodologies and tools Excellent skills in SQL (joins, aggregation, subqueries, performance tuning) Hands-on experience with Data Warehousing and data models (Star/Snowflake) Experience in test case creation, execution, defect logging, and closure Proficient in regression testing, data validation, data reconciliation Working knowledge of Azure Data Factory (ADF), Azure Synapse, and Databricks Experience with test management tools like JIRA, TestRail, or HP ALM Nice to Have Exposure to automation testing for data pipelines Scripting knowledge in Python or PySpark Understanding of CI/CD in data testing Experience with data masking, data governance, and privacy rules Qualifications Bachelor’s degree in Computer Science, Information Systems, or related field 3+ years of hands-on experience in ETL/Data Warehouse testing Excellent analytical and problem-solving skills Strong attention to detail and communication skills Skills: regression,azure,data reconciliation,test management tools,data validation,azure databricks,etl testing,data warehousing,dwh,etl pipeline,test case creation,azure data factory,test cases,etl tester,regression testing,sql,databricks

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are looking for a Java & Oracle ADF (Application Development Framework) Developer with 5+ years of experience to design, develop, and maintain enterprise applications using Java, Oracle's ADF technology stack, and related technologies.

Location: Ramanujan IT City, Chennai (Onsite)
Contract Duration: 3+ months (Extendable)
Immediate Joiner

Role and Responsibilities
Design and develop enterprise applications using Java & Oracle ADF framework and related technologies
Create and maintain ADF Business Components (Entity Objects, View Objects, Application Modules)
Develop user interfaces using ADF Faces components and ADF Task Flows
Implement business logic and data validation rules using ADF BC
Design and develop reports using Jasper Reports
Configure and maintain application servers (Tomcat, JBoss)
Integrate applications with MySQL databases and web services
Handle build and deployment processes
Perform code reviews and ensure adherence to coding standards
Debug and resolve production issues
Collaborate with cross-functional teams including business analysts, QA, and other developers
Provide technical documentation and maintain project documentation

Core Technical Skills
Strong expertise in Oracle ADF framework (5 years hands-on experience)
Proficient in Java/J2EE technologies
Advanced knowledge of ADF Business Components (Entity Objects, View Objects, Application Modules)
Strong experience with ADF Faces Rich Client components
Expertise in ADF Task Flows (bounded and unbounded)
Proficient in MySQL database design, optimization, and query writing
Strong experience with Jasper Reports for report development and customization

Application Server & Build Experience
Experience in deploying and maintaining applications on Tomcat Server
Experience with JBoss/WildFly application server configuration and deployment
Expertise in build tools (Maven/Ant) and build automation
Experience with continuous integration and deployment processes
Knowledge of application server clustering and load balancing

Database Skills
Strong knowledge of MySQL database administration
Experience in writing complex SQL queries and stored procedures
Understanding of database optimization and performance tuning
Knowledge of database backup and recovery procedures

Reporting Skills
Expertise in Jasper Reports design and development
Experience in creating complex reports with sub-reports
Knowledge of JasperReports Server administration
Ability to integrate reports with web applications

Posted 1 month ago

Apply

5.0 years

20 - 24 Lacs

Chennai, Tamil Nadu, India

On-site

Exp: 5 - 12 Yrs
Work Mode: Hybrid
Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon
Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities
Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies.
Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources.
SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis.
Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures.
Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions.
Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards.
Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications
Expertise in Snowflake for data warehousing and ELT processes.
Strong proficiency in SQL for relational databases and writing complex queries.
Experience with Informatica PowerCenter for data integration and ETL development.
Experience using Power BI for data visualization and business intelligence reporting.
Experience with Fivetran for automated ELT pipelines.
Familiarity with Sigma Computing, Tableau, Oracle, and DBT.
Strong data analysis, requirement gathering, and mapping skills.
Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), AWS, or GCP.
Experience with workflow management tools such as Airflow, Azkaban, or Luigi.
Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Skills: data modeling, business intelligence, python, dbt, performance tuning, airflow, informatica, azkaban, luigi, power bi, etl, dwh, fivetran, data quality, snowflake, sql
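The Slowly Changing Dimension Type-2 work mentioned above is usually generated by dbt snapshots; the sketch below hand-builds a simplified Snowflake MERGE for the same idea, purely as an illustration. The table and column names are invented, and a production pattern would also insert a fresh current row for changed keys (typically in a second statement), which dbt handles automatically.

```python
# Simplified illustration of an SCD Type-2 load. dbt snapshots generate similar SQL
# automatically; this hand-written MERGE only closes changed rows and inserts brand-new
# keys. Table/column names are invented for the example.
from textwrap import dedent

def scd2_merge_sql(target: str, staging: str, key: str, tracked_cols: list) -> str:
    """Build a Snowflake MERGE that closes changed rows and inserts new keys."""
    change_predicate = " OR ".join(f"t.{c} <> s.{c}" for c in tracked_cols)
    insert_cols = ", ".join([key] + tracked_cols)
    insert_vals = ", ".join(f"s.{c}" for c in [key] + tracked_cols)
    return dedent(f"""
        MERGE INTO {target} t
        USING {staging} s
          ON t.{key} = s.{key} AND t.is_current = TRUE
        WHEN MATCHED AND ({change_predicate}) THEN UPDATE SET
          is_current = FALSE,
          valid_to   = CURRENT_TIMESTAMP()
        WHEN NOT MATCHED THEN INSERT ({insert_cols}, valid_from, valid_to, is_current)
          VALUES ({insert_vals}, CURRENT_TIMESTAMP(), NULL, TRUE);
    """).strip()

print(scd2_merge_sql("dim_customer", "stg_customer", "customer_id",
                     ["customer_name", "segment", "region"]))
```

Running the printed statement would be done through the Snowflake connector or, more commonly, left to dbt so lineage and tests stay in one place.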

Posted 1 month ago

Apply

5.0 years

0 Lacs

Delhi, Delhi

Remote

Full time | Work From Office
This position is currently open.
Department / Category: ENGINEER
Listed on Jun 30, 2025
Work Location: New Delhi, Bangalore, Hyderabad

Job Description: Databricks Engineer (6 to 8 Years Relevant Experience)
We are looking for an experienced Databricks Engineer to join our data engineering team and contribute to designing and implementing scalable data solutions on the Azure platform. This role involves working closely with cross-functional teams to build high-performance data pipelines and maintain a modern Lakehouse architecture.

Key Responsibilities:
Design and develop scalable data pipelines using Spark-SQL and PySpark in Azure Databricks.
Build and maintain Lakehouse architecture using Azure Data Lake Storage (ADLS) and Databricks.
Perform comprehensive data preparation tasks, including data cleaning and normalization, deduplication, and type conversions.
Collaborate with the DevOps team to deploy and manage solutions in production environments.
Partner with Data Science and Business Intelligence teams to share insights, align on best practices, and drive innovation.
Support change management through training, communication, and documentation during upgrades, data migrations, and system changes.

Required Qualifications:
5+ years of IT experience with strong exposure to cloud technologies, particularly Microsoft Azure.
Hands-on experience with Databricks, Azure Data Factory (ADF), and Azure Data Lake Storage (ADLS).
Programming with PySpark, Python, and SQL.
Solid understanding of data engineering concepts, data modeling, and data processing frameworks.
Ability to work effectively in distributed, remote teams.
Excellent communication skills in English (both written and verbal).

Preferred Skills:
Strong working knowledge of distributed computing frameworks, especially Apache Spark and Databricks.
Experience with Delta Lake and Lakehouse architecture principles.
Familiarity with data tools and libraries such as Pandas, Spark-SQL, and PySpark.
Exposure to on-premise databases such as SQL Server, Oracle, etc.
Experience with version control tools (e.g., Git) and DevOps practices including CI/CD pipelines.

Required Skills for Databricks Engineer Job: Spark-SQL and PySpark in Azure Databricks, Python, SQL

Our Hiring Process: Screening (HR Round), Technical Round 1, Technical Round 2, Final HR Round
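The data-preparation tasks listed above (cleaning, normalization, deduplication, type conversions) can be sketched with Spark SQL on a small in-memory DataFrame so the example runs without ADLS. The column names and normalisation rules are assumptions for illustration only.

```python
# Hedged sketch of data preparation in PySpark / Spark SQL: normalise strings,
# convert types, then keep only the latest record per key. Runs on a tiny in-memory
# DataFrame; in practice the input would come from ADLS/Delta.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer_prep").getOrCreate()

raw = spark.createDataFrame(
    [("C001", " Alice ", "alice@EXAMPLE.com", "2024-01-05"),
     ("C001", "Alice",   "alice@example.com", "2024-02-10"),   # later duplicate
     ("C002", "Bob",     None,                "2024-01-20")],
    ["customer_id", "name", "email", "updated_at"],
)

# Normalise strings and convert types before deduplication.
prepared = (
    raw.withColumn("name", F.trim("name"))
       .withColumn("email", F.lower("email"))
       .withColumn("updated_at", F.to_date("updated_at"))
)
prepared.createOrReplaceTempView("customers")

# Keep only the most recent record per customer_id (window-based deduplication).
deduped = spark.sql("""
    SELECT customer_id, name, email, updated_at
    FROM (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY updated_at DESC) AS rn
        FROM customers
    ) t
    WHERE rn = 1
""")
deduped.show()
```

The same logic can be expressed with dropDuplicates or a Window in the DataFrame API; Spark SQL is shown here because the listing calls it out explicitly.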

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies