
53 ADLS Jobs - Page 2

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

2.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Tiger Analytics is a global AI and analytics consulting firm at the forefront of solving complex problems using data and technology. With a team of over 2,800 experts spread across the globe, we are dedicated to making a positive impact on the lives of millions worldwide. Our culture is built on expertise, respect, and collaboration, with a focus on teamwork. While our headquarters are in Silicon Valley, we have delivery centers and offices in various cities in India, the US, UK, Canada, and Singapore, as well as a significant remote workforce.

As an Azure Big Data Engineer at Tiger Analytics, you will be part of a dynamic team driving an AI revolution. Your typical day will involve working on a variety of analytics solutions and platforms, including data lakes, modern data platforms, and data fabric solutions built with open-source, big data, and cloud technologies on Microsoft Azure. Your responsibilities may include designing and building scalable data ingestion pipelines, executing high-performance data processing, orchestrating pipelines, designing exception-handling mechanisms, and collaborating with cross-functional teams to bring analytical solutions to life.

To excel in this role, we expect you to have 4 to 9 years of total IT experience, with at least 2 years in big data engineering and Microsoft Azure. You should be well versed in technologies such as Azure Data Factory, PySpark, Databricks, Azure SQL Database, Azure Synapse Analytics, Event Hub and Stream Analytics, Cosmos DB, and Purview. Your passion for writing high-quality, scalable code and your ability to collaborate effectively with stakeholders are essential for success in this role. Experience with big data technologies like Hadoop, Spark, Airflow, NiFi, Kafka, Hive, and Neo4j, as well as knowledge of different file formats and REST API design, will be advantageous.

At Tiger Analytics, we value diversity and inclusivity, and we encourage individuals with varying skills and backgrounds to apply. We are committed to providing equal opportunities for all our employees and fostering a culture of trust, respect, and growth. Your compensation package will be competitive and aligned with your expertise and experience. If you are looking to be part of a forward-thinking team that is pushing the boundaries of what is possible in AI and analytics, we invite you to join us at Tiger Analytics and be part of our exciting journey towards building innovative solutions that inspire and energize.

Posted 3 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Haryana

On-site

As a Senior Manager specializing in Data Analytics & AI, you will be a pivotal member of the EY Data, Analytics & AI Ireland team. Your role as a Databricks Platform Architect will involve enabling clients to extract significant value from their information assets through innovative data analytics solutions. You will have the opportunity to work across various industries, collaborating with diverse teams and leading the design and implementation of data architecture strategies aligned with client goals.

Your key responsibilities will include leading teams with varying skill sets in utilizing different data and analytics technologies, adapting your leadership style to meet client needs, creating a positive learning culture, engaging with clients to understand their data requirements, and developing data artefacts based on industry best practices. Additionally, you will assess existing data architectures, develop data migration strategies, and ensure data integrity and minimal disruption during migration activities.

To qualify for this role, you must possess a strong academic background in computer science or related fields, along with at least 7 years of experience as a Data Architect or in a similar role in a consulting environment. Hands-on experience with cloud services, data modeling techniques, data management concepts, Python, Spark, Docker, Kubernetes, and cloud security controls is essential. Ideally, you will be able to communicate technical concepts effectively to non-technical stakeholders, lead the design and optimization of the Databricks platform, work closely with the data engineering team, maintain a comprehensive understanding of the data pipeline, and stay updated on new and emerging technologies in the field.

EY offers a competitive remuneration package, flexible working options, career development opportunities, and a comprehensive Total Rewards package. Additionally, you will benefit from support, coaching, opportunities for skill development, and a diverse and inclusive culture that values individual contributions. If you are passionate about leveraging data to solve complex problems, drive business outcomes, and contribute to a better working world, consider joining EY as a Databricks Platform Architect. Apply now to be part of a dynamic team dedicated to innovation and excellence.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

Genpact is a global professional services and solutions firm focused on delivering outcomes that shape the future. With over 125,000 employees in more than 30 countries, we are driven by curiosity, agility, and the desire to create lasting value for our clients. Our purpose is the relentless pursuit of a world that works better for people, serving and transforming leading enterprises, including Fortune Global 500 companies, through deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the position of Lead Consultant - Databricks Developer (AWS). As a Databricks Developer in this role, you will be responsible for solving cutting-edge, real-world problems to meet both functional and non-functional requirements.

Responsibilities:
- Stay updated on new and emerging technologies and explore their potential applications for service offerings and products.
- Collaborate with architects and lead engineers to design solutions that meet functional and non-functional requirements.
- Demonstrate knowledge of relevant industry trends and standards.
- Showcase strong analytical and technical problem-solving skills.
- Possess excellent coding skills, particularly in Python or Scala, with a preference for Python.

Qualifications:

Minimum qualifications:
- Bachelor's degree in CS, CE, CIS, IS, MIS, or an engineering discipline, or equivalent work experience.
- Experience in the Data Engineering domain.
- Completed at least 2 end-to-end projects in Databricks.

Additional qualifications:
- Familiarity with Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration (see the sketch after this listing for a representative Delta Lake pattern).
- Understanding of the Databricks Lakehouse concept and its implementation in enterprise environments.
- Ability to create complex data pipelines.
- Strong knowledge of data structures and algorithms.
- Proficiency in SQL and Spark SQL.
- Experience in performance optimization to enhance efficiency and reduce costs.
- Experience with both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Experience with cloud platforms (Azure, AWS, GCP) and common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Skilled in writing unit and integration test cases.
- Excellent communication skills and experience working in teams of 5 or more.
- Positive attitude towards learning new skills and upskilling.
- Knowledge of Unity Catalog and basic governance.
- Understanding of Databricks SQL Endpoint.
- Experience with CI/CD to build pipelines for Databricks jobs.
- Exposure to migration projects for building unified data platforms.
- Familiarity with DBT, Docker, and Kubernetes.

This is a full-time position based in Gurugram, India. The job was posted on August 5, 2024, and the unposting date is set for October 4, 2024.
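As a rough illustration of the Delta Lake work this posting describes, here is a minimal PySpark sketch of an incremental upsert into a Delta table. The table path and column names are hypothetical, and it assumes a Databricks cluster or a local SparkSession already configured with delta-spark; it is a sketch of the pattern, not the employer's actual pipeline.

```python
# Minimal Delta Lake upsert sketch (hypothetical paths/columns; assumes a
# Databricks cluster or a SparkSession configured with delta-spark).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert-demo").getOrCreate()

target_path = "/mnt/lake/silver/customers"  # hypothetical Delta table location

# New/changed rows arriving from an upstream source.
updates = spark.createDataFrame(
    [(1, "alice@example.com"), (2, "bob@example.com")],
    ["customer_id", "email"],
)

if DeltaTable.isDeltaTable(spark, target_path):
    target = DeltaTable.forPath(spark, target_path)
    # MERGE: update matching customers, insert new ones.
    (target.alias("t")
           .merge(updates.alias("s"), "t.customer_id = s.customer_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())
else:
    # First load: create the Delta table.
    updates.write.format("delta").save(target_path)
```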

Posted 3 weeks ago

Apply

5.0 - 10.0 years

5 - 7 Lacs

Bengaluru, Karnataka, India

On-site

Critical Skills to Have:
- Five or more years of experience in the field of information technology.
- A general understanding of several software platforms and development technologies.
- Experience with SQL, RDBMS, data lakes, and warehouses.
- Knowledge of the Hadoop ecosystem, Azure, ADLS, Kafka, Apache Delta, and Databricks/Spark.
- Knowledge of a data modeling tool, such as ER/Studio or Erwin, is advantageous.
- A history of collaboration with product managers, technology teams, and business partners.
- Strong familiarity with Agile and DevOps techniques.
- Excellent communication skills, both written and spoken.

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant, Azure Data Engineer.

Responsibilities
- Strong knowledge of building pipelines in Azure Data Factory or Azure Synapse Analytics.
- Knowledge of Azure Databricks and Azure Synapse Analytics for ingesting data from different sources.
- Good at writing SQL queries on SQL Database and SQL DWH.
- Knowledge of design, development, testing, and implementation of Azure data stack technologies.
- Expert-level knowledge of SQL DB and data warehouses.
- Knowledge of Azure Data Lake (Blob and ADLS) is mandatory.
- Able to perform querying in SQL Database and SQL DWH.
- Strong in either Python or Scala.
- Experience in various ETL techniques and frameworks.
- Ability to work in a team and to deliver and accept peer review.
- Understanding of machine learning algorithms and Power BI is an added advantage.
- Experience in GenAI projects.

Qualifications we seek in you!

Minimum qualifications
- Graduate

Preferred qualifications
- Personal drive and positive work ethic to deliver results within deadlines and in demanding situations.
- Flexibility to adapt to a variety of engagement types, working hours, work environments, and locations.
- Excellent communication skills.

Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

25 - 35 Lacs

Bengaluru

Hybrid

We are hiring Azure Data Engineers for an active project (Bangalore location). Interested candidates can share the following details by email, along with an updated resume:
- Total experience?
- Relevant experience in Azure data engineering?
- Current organization?
- Current location?
- Current fixed salary?
- Expected salary?
- Do you have any offers? If yes, mention the offer you have and the reason for looking for more opportunities.
- Open to relocating to Bangalore?
- Notice period? If serving or not working, mention your last working day (LWD).
- Do you have a PF account?

Posted 3 weeks ago

Apply

4.0 - 9.0 years

5 - 15 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Role & responsibilities

A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance and issue resolution, and to ensure high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Preferred candidate profile

Primary skills: Technology -> Cloud Platform -> Azure Analytics Services -> Azure Data Lake

Azure Data Lake (ADLS) Developer/Engineer

Posted 1 month ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Mumbai, Maharashtra, India

On-site

Key Responsibilities:

IoT System Monitoring & Support
- Provide Level 2 and Level 3 support for IoT solutions deployed in industrial environments.
- Monitor Edge and cloud-based IoT systems for performance, connectivity, and reliability issues.
- Develop real-time alerting mechanisms for IoT device failures, connectivity drops, or performance degradation (a minimal sketch follows after this listing).

Troubleshooting & Issue Resolution
- Investigate and resolve issues related to IoT protocols (MQTT, HTTP, AMQP, OPC-UA, etc.).
- Debug OPC-UA tags and configurations for proper data transmission.
- Analyse logs from MQTT brokers (NanoMQ, EMQX, Mosquitto) and ensure message integrity.
- Work with Docker-based containerized workloads and troubleshoot deployment issues.

Edge-to-Cloud Connectivity & Maintenance
- Ensure stable Edge-to-Cloud connectivity using Azure IoT Hub, Azure Event Hub, and ADLS.
- Support Azure-based IoT deployments, including Azure IoT Edge, the Azure IIoT framework, and Azure IoT Central.
- Maintain K3s or AKS (Kubernetes) clusters used for IoT edge deployments.

CI/CD & DevOps Support
- Manage and troubleshoot CI/CD pipelines for IoT deployments in Azure DevOps.
- Maintain version control, deployments, and container registries (Azure Container Registry).
- Debug Helm-based deployments and Kubernetes configurations.

Documentation & Collaboration
- Create and maintain technical documentation for IoT architectures, troubleshooting steps, and best practices.
- Work closely with IoT developers, DevOps engineers, and manufacturing teams to ensure smooth IoT system operations.
- Train end-users and support teams on IoT monitoring tools and incident handling.

Mandatory/Required Skills:
- Experience in IoT support: 4+ years supporting industrial IoT solutions in a production environment.
- Strong troubleshooting skills: expertise in diagnosing and resolving issues in Edge-to-Cloud architectures.
- IoT & IIoT knowledge: hands-on experience with IoT protocols (MQTT, OPC-UA, HTTP, AMQP).
- MQTT brokers: experience working with NanoMQ, EMQX, Mosquitto.
- Python & scripting: strong Python scripting skills for debugging and automating IoT operations.
- Containerization: hands-on experience with Docker, building images, and deploying containers.
- Azure IoT services: experience with Azure IoT Hub, Azure Event Hub, ADLS, Azure Data Explorer.
- DevOps & CI/CD: experience with Azure DevOps, CI/CD pipelines, Kubernetes (K3s or AKS).
- Monitoring & alerting: familiarity with monitoring IoT device health and cloud infrastructure.

Good-to-Have Qualifications:
- Experience with Neuron and eKuiper for IoT data processing.
- Working experience with OPC-UA servers or Prosys simulators.
- Hands-on experience with Azure IoT Edge, Azure IoT Central, Azure Arc, and Azure Edge Essentials.
- Familiarity with Rancher for Kubernetes cluster management.
- Experience in manufacturing or industrial IoT environments.
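To make the alerting responsibility concrete, here is a minimal Python sketch of a broker-level "topic gone quiet" check. It assumes paho-mqtt 1.x (2.x additionally requires a callback API version argument to Client), and the broker host and topic names are hypothetical placeholders, not values from this posting.

```python
# Minimal MQTT silence-alert sketch (assumes paho-mqtt 1.x; broker and topic
# names are hypothetical).
import time
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.local"   # hypothetical broker address
TOPIC = "factory/+/telemetry"          # hypothetical telemetry topic

last_message_at = time.monotonic()

def on_message(client, userdata, msg):
    """Record the arrival time of every telemetry message."""
    global last_message_at
    last_message_at = time.monotonic()
    print(f"{msg.topic}: {len(msg.payload)} bytes")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER_HOST, 1883, keepalive=60)
client.subscribe(TOPIC, qos=1)
client.loop_start()  # handle network traffic on a background thread

# Alert if the topic goes quiet for more than 5 minutes.
while True:
    time.sleep(30)
    if time.monotonic() - last_message_at > 300:
        print("ALERT: no telemetry received in the last 5 minutes")
```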

Posted 1 month ago

Apply

0.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Databricks Developer. In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities
- Maintain close awareness of new and emerging technologies and their potential application for service offerings and products.
- Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
- Demonstrate knowledge of relevant industry trends and standards.
- Demonstrate strong analytical and technical problem-solving skills.
- Experience in the Data Engineering domain is a must.

Qualifications we seek in you!

Minimum qualifications
- Bachelor's degree or equivalent (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience.
- Excellent coding skills in Python or Scala, preferably Python.
- Experience in the Data Engineering domain.
- At least two end-to-end projects implemented in Databricks.
- Hands-on experience with Databricks components: Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration.
- Well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
- Good understanding of how to create complex data pipelines.
- Good knowledge of data structures and algorithms.
- Strong in SQL and Spark SQL.
- Strong performance-optimization skills to improve efficiency and reduce cost.
- Experience with both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Experience on any cloud (Azure, AWS, GCP) and common services such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Strong in writing unit and integration test cases.
- Strong communication skills and experience working in teams of five or more.
- Great attitude towards learning new skills and upskilling existing ones.

Preferred Qualifications
- Unity Catalog and basic governance knowledge.
- Understanding of Databricks SQL Endpoint.
- CI/CD experience building pipelines for Databricks jobs.
- Experience on migration projects building unified data platforms.
- Knowledge of DBT.
- Knowledge of Docker and Kubernetes.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Databricks Developer. In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities
- Maintain close awareness of new and emerging technologies and their potential application for service offerings and products.
- Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
- Demonstrate knowledge of relevant industry trends and standards.
- Demonstrate strong analytical and technical problem-solving skills.
- Experience in the Data Engineering domain is a must.

Qualifications we seek in you!

Minimum qualifications
- Bachelor's degree or equivalent (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience.
- Excellent coding skills in Python or Scala, preferably Python.
- Experience in the Data Engineering domain.
- At least two end-to-end projects implemented in Databricks.
- Hands-on experience with Databricks components: Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration.
- Well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
- Good understanding of how to create complex data pipelines.
- Good knowledge of data structures and algorithms.
- Strong in SQL and Spark SQL.
- Strong performance-optimization skills to improve efficiency and reduce cost.
- Experience with both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Experience on any cloud (Azure, AWS, GCP) and common services such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Strong in writing unit and integration test cases.
- Strong communication skills and experience working in teams of five or more.
- Great attitude towards learning new skills and upskilling existing ones.

Preferred Qualifications
- Unity Catalog and basic governance knowledge.
- Understanding of Databricks SQL Endpoint.
- CI/CD experience building pipelines for Databricks jobs.
- Experience on migration projects building unified data platforms.
- Knowledge of DBT.
- Knowledge of Docker and Kubernetes.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Databricks Developer. In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities
- Maintain close awareness of new and emerging technologies and their potential application for service offerings and products.
- Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
- Demonstrate knowledge of relevant industry trends and standards.
- Demonstrate strong analytical and technical problem-solving skills.
- Experience in the Data Engineering domain is a must.

Qualifications we seek in you!

Minimum qualifications
- Bachelor's degree or equivalent (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience.
- Excellent coding skills in Python or Scala, preferably Python.
- Experience in the Data Engineering domain.
- At least two end-to-end projects implemented in Databricks.
- Hands-on experience with Databricks components: Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration.
- Well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
- Good understanding of how to create complex data pipelines.
- Good knowledge of data structures and algorithms.
- Strong in SQL and Spark SQL.
- Strong performance-optimization skills to improve efficiency and reduce cost.
- Experience with both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Experience on any cloud (Azure, AWS, GCP) and common services such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Strong in writing unit and integration test cases.
- Strong communication skills and experience working in teams of five or more.
- Great attitude towards learning new skills and upskilling existing ones.

Preferred Qualifications
- Unity Catalog and basic governance knowledge.
- Understanding of Databricks SQL Endpoint.
- CI/CD experience building pipelines for Databricks jobs.
- Experience on migration projects building unified data platforms.
- Knowledge of DBT.
- Knowledge of Docker and Kubernetes.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Databricks Developer. In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities
- Maintain close awareness of new and emerging technologies and their potential application for service offerings and products.
- Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
- Demonstrate knowledge of relevant industry trends and standards.
- Demonstrate strong analytical and technical problem-solving skills.
- Experience in the Data Engineering domain is a must.

Qualifications we seek in you!

Minimum qualifications
- Bachelor's degree or equivalent (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience.
- Excellent coding skills in Python or Scala, preferably Python.
- Experience in the Data Engineering domain.
- At least two end-to-end projects implemented in Databricks.
- Hands-on experience with Databricks components: Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration.
- Well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
- Good understanding of how to create complex data pipelines.
- Good knowledge of data structures and algorithms.
- Strong in SQL and Spark SQL.
- Strong performance-optimization skills to improve efficiency and reduce cost.
- Experience with both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Experience on any cloud (Azure, AWS, GCP) and common services such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Strong in writing unit and integration test cases.
- Strong communication skills and experience working in teams of five or more.
- Great attitude towards learning new skills and upskilling existing ones.

Preferred Qualifications
- Unity Catalog and basic governance knowledge.
- Understanding of Databricks SQL Endpoint.
- CI/CD experience building pipelines for Databricks jobs.
- Experience on migration projects building unified data platforms.
- Knowledge of DBT.
- Knowledge of Docker and Kubernetes.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

6.0 - 10.0 years

30 - 35 Lacs

Bengaluru

Work from Office

We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight); a minimal sketch of this pattern follows after this listing.
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs through performance tuning, partitioning, and caching strategies.
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.

Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, the DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries, partitioning, and indexing for performance tuning.
- Experience with workflow orchestration tools like Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance-optimization skills.
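As a rough sketch of the ETL work described above, here is a minimal PySpark pipeline with the partitioning and caching touches the posting mentions. The bucket paths and column names are hypothetical placeholders.

```python
# Minimal PySpark ETL sketch (bucket paths and columns are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON events from a data lake path.
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: cleanse, deduplicate, and derive a partition column.
orders = (
    raw.filter(F.col("order_id").isNotNull())
       .dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("created_at"))
)
orders.cache()  # reused by multiple downstream aggregations

daily_revenue = (
    orders.groupBy("order_date")
          .agg(F.sum("amount").alias("revenue"),
               F.count("*").alias("order_count"))
)

# Load: write date-partitioned Parquet for downstream consumers.
(daily_revenue.repartition("order_date")
              .write.mode("overwrite")
              .partitionBy("order_date")
              .parquet("s3://example-bucket/curated/daily_revenue/"))

spark.stop()
```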

Posted 1 month ago

Apply

0.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

- Excellent communication and presentation skills.
- Extensive experience in the Azure stack: Azure Databricks, Azure Synapse, ADLS, Azure SQL DB, Azure Data Factory, Cosmos DB, Analysis Services, Event Hub, etc.
- Excellent experience in data processing using Azure Databricks, complex data transformation using PySpark or Python, and building end-to-end data pipelines using Azure Databricks.
- Experience in job scheduling using Oozie, Airflow, or any other ETL scheduler.
- Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala.
- Good experience in designing and delivering data analytics solutions using Azure cloud-native services.
- Good experience in requirements analysis, solution architecture design, data modelling, ETL, data integration, and data migration design.
- Documentation of solutions (e.g., data models, configurations, and setup).
- Well versed in Waterfall, Agile, Scrum, and similar project delivery methodologies.
- Experienced in internal as well as external stakeholder management.

Posted 1 month ago

Apply

5.0 - 8.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Azure backend expert (ADLS, ADF, and Azure SQL DW), 4+ years, immediate joiners only.

One Azure backend expert (Strong SC or Specialist Senior):
- Hands-on experience working with ADLS, ADF, and Azure SQL DW.
- Minimum 3 years of working experience delivering Azure projects.

Must Have:
- 3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation.
- Optimize and tune Databricks jobs for performance and scalability.
- Experience with Scala and/or Python programming languages.
- Proficiency in SQL for querying and managing data.
- Expertise in ETL (Extract, Transform, Load) processes.
- Knowledge of data modeling and data warehousing concepts.
- Implement best practices for data pipelines, including monitoring, logging, and error handling (see the sketch after this listing).
- Excellent problem-solving skills and attention to detail.
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving abilities.
- Experience with version control systems (e.g., Git) to manage and track changes to the codebase.
- Document technical designs, processes, and procedures related to Databricks development.
- Stay current with Databricks platform updates and recommend improvements to existing processes.

Good to Have:
- Agile delivery experience.
- Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP).
- Knowledge of Agile and Scrum software development methodologies.
- Understanding of data lake architectures.
- Familiarity with tools like Apache NiFi, Talend, or Informatica.
- Skills in designing and implementing data models.

Skills: ADF, SQL, ADLS, Azure, Azure SQL DW
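For the monitoring, logging, and error-handling item above, here is a hedged Python sketch of the basic pattern in a Databricks-style PySpark job. The table and path names are hypothetical, and this is one simple way to structure it, not a prescribed implementation.

```python
# Pipeline step logging/error-handling sketch (hypothetical paths and tables).
import logging
from pyspark.sql import SparkSession

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("etl")

spark = SparkSession.builder.appName("nightly-load").getOrCreate()

def run_step(name, fn):
    """Run one pipeline step, logging progress and surfacing failures."""
    log.info("starting step: %s", name)
    try:
        result = fn()
        log.info("finished step: %s", name)
        return result
    except Exception:
        log.exception("step failed: %s", name)
        raise  # fail the job so the scheduler can alert and retry

src = run_step("extract", lambda: spark.read.parquet("/mnt/raw/events"))
clean = run_step("transform", lambda: src.dropna(subset=["event_id"]))
run_step("load", lambda: clean.write.mode("append")
                              .saveAsTable("curated.events"))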

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Job Summary

We are seeking a skilled Azure Data Engineer with 4+ years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms.

Key Responsibilities
- Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse.
- Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS); a sketch of this pattern follows after this listing.
- Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions.
- Develop ETL/ELT processes and integrate data from multiple sources.
- Monitor, debug, and optimize workflows for performance and cost-efficiency.
- Ensure data governance, quality, and security best practices are maintained.

Must-Have Skills
- 4+ years of total experience in data engineering.
- 2+ years of experience with Azure Databricks (PySpark, notebooks, Delta Lake).
- Strong experience with Azure Data Factory, Azure SQL, and ADLS.
- Proficient in writing SQL queries and Python/Scala scripting.
- Understanding of CI/CD pipelines and version control systems (e.g., Git).
- Solid grasp of data modeling and warehousing concepts.

Skills: Azure Synapse, data modeling, data engineering, Azure, Azure Databricks, Azure Data Lake Storage (ADLS), CI/CD, ETL, ELT, data warehousing, SQL, Scala, Git, Azure Data Factory, Python
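As an illustration of the ADLS data lake work above, here is a minimal PySpark sketch of a bronze-to-silver Delta Lake flow on ADLS Gen2. The storage account, container, and column names are hypothetical, and it assumes a Databricks workspace where Delta and storage credentials are already configured.

```python
# Minimal Delta-on-ADLS sketch (hypothetical storage account, container,
# and columns; assumes a Databricks workspace with Delta configured).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls-delta-demo").getOrCreate()

# ADLS Gen2 paths use the abfss:// scheme in Databricks.
bronze = "abfss://lake@examplestorage.dfs.core.windows.net/bronze/sales"
silver = "abfss://lake@examplestorage.dfs.core.windows.net/silver/sales"

# Ingest raw CSV landings into a bronze Delta table.
(spark.read.option("header", True).csv("/mnt/landing/sales/")
      .write.format("delta").mode("append").save(bronze))

# Refine bronze -> silver: typed columns plus a basic quality filter.
(spark.read.format("delta").load(bronze)
      .withColumn("amount", F.col("amount").cast("double"))
      .filter(F.col("amount") > 0)
      .write.format("delta").mode("overwrite").save(silver))
```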

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Responsibilities
- Create and manage scalable data pipelines to collect, process, and store large volumes of data from various sources.
- Integrate data from multiple sources, ensuring consistency, quality, and reliability.
- Design, implement, and optimize database schemas and structures to support data storage and retrieval.
- Develop and maintain ETL (Extract, Transform, Load) processes to accurately and efficiently move data between systems.
- Build and maintain data warehouses to support business intelligence and analytics needs.
- Optimize data processing and storage performance for efficient resource utilization and quick retrieval.
- Create and maintain comprehensive documentation for data pipelines, ETL processes, and database schemas.
- Monitor data pipelines and systems for performance and reliability, troubleshooting and resolving issues as they arise.
- Stay up to date with emerging technologies and best practices in data engineering, evaluating and recommending new tools as appropriate.

Requirements
- Bachelor's or master's degree in Computer Science, Information Technology, or a related field (engineering or math preferred).
- 5+ years of experience with SQL, Python, .NET, SSIS, and SSAS.
- 2+ years of experience with Azure cloud services, particularly SQL Server, ADF, Azure Databricks, ADLS, Key Vault, Azure Functions, and Logic Apps, with an emphasis on Databricks.
- 2+ years of experience using Git and deploying code using a CI/CD approach.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Attention to detail and a commitment to quality.

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Work from Office

One Azure backend expert (Strong SC or Specialist Senior):
- Hands-on experience working with ADLS, ADF, and Azure SQL DW.
- Minimum 3 years of working experience delivering Azure projects.

Must Have:
- 3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation.
- Optimize and tune Databricks jobs for performance and scalability.
- Experience with Scala and/or Python programming languages.
- Proficiency in SQL for querying and managing data.
- Expertise in ETL (Extract, Transform, Load) processes.
- Knowledge of data modeling and data warehousing concepts.
- Implement best practices for data pipelines, including monitoring, logging, and error handling.
- Excellent problem-solving skills and attention to detail.
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving abilities.
- Experience with version control systems (e.g., Git) to manage and track changes to the codebase.
- Document technical designs, processes, and procedures related to Databricks development.
- Stay current with Databricks platform updates and recommend improvements to existing processes.

Good to Have:
- Agile delivery experience.
- Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP).
- Knowledge of Agile and Scrum software development methodologies.
- Understanding of data lake architectures.
- Familiarity with tools like Apache NiFi, Talend, or Informatica.
- Skills in designing and implementing data models.

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Hybrid

We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in C# development, Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks including Adobe Tag Management and Pixel Management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Architect and develop secure REST APIs in C# to support advanced attribution models and marketing analytics pipelines.
- Implement cryptographic hashing (e.g., SHA-256); a hedged sketch follows after this listing.
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data-sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Utilize Fabric and OCI environments as needed for data integration and marketing intelligence workflows.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience with C# and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Proficiency with Azure cloud technologies, especially Cosmos DB, Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe Tag Management, specifically in pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.
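To make the hashing requirement concrete, here is a minimal Python sketch of normalizing and SHA-256 hashing an email address before sending it to a server-side conversions API. The trim-and-lowercase normalization shown follows common practice for Meta CAPI user data, but the platform's current documentation should be confirmed before relying on it.

```python
# SHA-256 email hashing sketch for server-side conversion APIs.
# Normalization rules (strip whitespace, lowercase) reflect common
# Meta CAPI practice; verify against the platform's current docs.
import hashlib

def hash_email(email: str) -> str:
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(hash_email("  Jane.Doe@Example.com "))
# -> a 64-character hex digest, transmitted instead of the raw address
```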

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Hybrid

We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in C# development, Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks including Adobe Tag Management and Pixel Management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Utilize Fabric and OCI environments as needed for data integration and marketing intelligence workflows.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Implement cryptographic hashing (e.g., SHA-256).
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data-sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience with Fabric and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Proficiency with Azure cloud technologies, especially Cosmos DB, Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe Tag Management, specifically in pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Hybrid

We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks including Adobe Tag Management and Pixel Management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Utilize OCI environments as needed for data integration and marketing intelligence workflows.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Implement cryptographic hashing (e.g., SHA-256).
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data-sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience in Python and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Proficiency with Azure cloud technologies, Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe Tag Management, specifically in pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.

Posted 1 month ago

Apply

0.0 years

0 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant- Databricks Developer ! In this role, the Databricks Developer is responsible for solving the real world cutting edge problem to meet both functional and non-functional requirements. Responsibilities Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Work with architect and lead engineers for solutions to meet functional and non-functional requirements. Demonstrated knowledge of relevant industry trends and standards. Demonstrate strong analytical and technical problem-solving skills. Must have experience in Data Engineering domain . Qualifications we seek in you! Minimum qualifications Bachelor&rsquos Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience. Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Work with architect and lead engineers for solutions to meet functional and non-functional requirements. Demonstrated knowledge of relevant industry trends and standards. Demonstrate strong analytical and technical problem-solving skills. Must have excellent coding skills either Python or Scala, preferably Python. Must have experience in Data Engineering domain . Must have implemented at least 2 project end-to-end in Databricks. Must have at least experience on databricks which consists of various components as below Delta lake dbConnect db API 2.0 Databricks workflows orchestration Must be well versed with Databricks Lakehouse concept and its implementation in enterprise environments. Must have good understanding to create complex data pipeline Must have good knowledge of Data structure & algorithms. Must be strong in SQL and sprak-sql . Must have strong performance optimization skills to improve efficiency and reduce cost . Must have worked on both Batch and streaming data pipeline . Must have extensive knowledge of Spark and Hive data processing framework. Must have worked on any cloud (Azure, AWS, GCP) and most common services like ADLS/S3, ADF/Lambda, CosmosDB /DynamoDB, ASB/SQS, Cloud databases. Must be strong in writing unit test case and integration test Must have strong communication skills and have worked on the team of size 5 plus Must have great attitude towards learning new skills and upskilling the existing skills. Preferred Qualifications Good to have Unity catalog and basic governance knowledge. Good to have Databricks SQL Endpoint understanding. Good To have CI/CD experience to build the pipeline for Databricks jobs. Good to have if worked on migration project to build Unified data platform. Good to have knowledge of DBT. Good to have knowledge of docker and Kubernetes. 
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit . Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: Inviting applications for the role of Azure Data Engineer.
Experience - 5 to 8 years
Joining Location - Chennai
Required Technical Skill Set - ADB, ADF
3+ years of relevant experience in PySpark and Azure Databricks.
Proficiency in integrating, transforming, and consolidating data from various structured and unstructured data sources (a brief sketch follows this listing).
Good experience in SQL or native SQL query languages.
Strong experience in implementing Databricks notebooks using Python.
Good experience in Azure Data Factory, ADLS, storage services, serverless architecture, and Azure Functions.
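A minimal sketch of the kind of source consolidation this listing describes, assuming hypothetical ADLS Gen2 paths and a Databricks notebook where spark already exists:

```python
from pyspark.sql import functions as F

# Hypothetical containers/paths on ADLS Gen2 -- adjust to your storage account.
customers = spark.read.option("header", True).csv(
    "abfss://raw@examplestore.dfs.core.windows.net/crm/customers.csv"
)
orders = spark.read.parquet(
    "abfss://raw@examplestore.dfs.core.windows.net/sales/orders/"
)

# Consolidate the two sources and aggregate with the DataFrame API;
# the same logic could equally be expressed in SQL via spark.sql(...).
consolidated = (
    orders.join(customers, "customer_id", "left")
          .groupBy("customer_id", "region")
          .agg(F.sum("amount").alias("total_spend"))
)

consolidated.write.mode("overwrite").parquet(
    "abfss://curated@examplestore.dfs.core.windows.net/customer_spend/"
)
```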

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 16 Lacs

Bangalore Rural, Bengaluru

Work from Office

Experience in designing, building, and managing data solutions on Azure. Design, develop, and optimize big data pipelines and architectures on Azure. Implement ETL/ELT processes using Azure Data Factory, Databricks, and Spark.

Required Candidate Profile:
5+ years of experience in data engineering and big data technologies.
Hands-on experience with Azure services (Azure Data Factory, Azure Synapse, Azure SQL, ADLS, etc.).
Databricks Certification (mandatory).

Posted 1 month ago

Apply

6.0 - 8.0 years

7 - 11 Lacs

Gurugram

Work from Office

DISCOVER your opportunity

What will your essential responsibilities include?
Possess excellent domain knowledge of data warehousing technologies, SQL, and data models to develop test strategies and approaches from a Quality Engineering perspective.
In close coordination with project teams, help lead all efforts from a Quality Engineering perspective.
Work with data engineers or data scientists to collect and prepare the necessary test data sets; ensure the data adequately represents real-world scenarios and covers a diverse range of inputs.
With an automation-first mindset, work towards testing of user interfaces such as Business Intelligence solutions and validation of functionalities, while constantly looking out for efficiency gains and process improvements.
Triage and prioritize stories and epics with all stakeholders to ensure optimal deliveries.
Engage with various stakeholders like business partners, product owners, and development and infrastructure teams to ensure alignment with the overall roadmap.
Track current progress of testing activities, find and track test metrics, and estimate and communicate improvement actions based on the test metric results and experience.
Automate processes such as data loads, user interfaces such as Business Intelligence solutions, and other validations of business KPIs (a brief reconciliation sketch follows this listing).
Adopt and implement best practices for documentation of test plans, cases, and results in JIRA.
Triage and prioritize defects with all stakeholders.
Take leadership accountability for ensuring that every release to customers is fit for purpose and performant.
Apply knowledge of Scaled Agile, Scrum, or Kanban methodology.
You will report to the Lead UAT.

SHARE your talent

We're looking for someone who has these abilities and skills:

Required Skills and Abilities:
A minimum of a bachelor's or master's (preferred) degree in a relevant discipline.
Relevant years of excellent testing background, including knowledge/experience in automation.
Insurance experience in data, underwriting, claims, or operations, including influencing, collaborating, and leading efforts in complex, disparate, and interrelated teams.
Excellent experience with SQL Server, Azure Databricks notebooks, Power BI, ADLS, Cosmos DB, and SQL DW Analytics.
A solid background in software development, with experience ingesting, transforming, and storing data from large datasets using PySpark in Azure Databricks, and a strong grasp of distributed computing concepts.
Hands-on experience designing and developing ETL pipelines in PySpark on Azure Databricks with strong Python scripting.

Desired Skills and Abilities:
Experience doing UAT/system integration testing in the insurance industry.
Excellent technical testing experience; API testing and UI automation are a plus.
Knowledge/experience of testing cloud-based systems in different data staging layers.
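As an illustration of the automated data-load validation this role describes, here is a minimal PySpark reconciliation sketch. The table and column names are hypothetical, and spark is assumed to be a live Databricks session:

```python
from pyspark.sql import functions as F

# Row-count reconciliation between a source staging table and the
# warehouse table the pipeline produced (both names are hypothetical).
source_count = spark.table("staging.policies").count()
target_count = spark.table("warehouse.policies").count()
assert source_count == target_count, (
    f"Row count mismatch: staging={source_count}, warehouse={target_count}"
)

# KPI-level validation: a business total should survive the load intact.
src_premium = spark.table("staging.policies").agg(F.sum("premium")).first()[0]
tgt_premium = spark.table("warehouse.policies").agg(F.sum("premium")).first()[0]
assert abs(src_premium - tgt_premium) < 0.01, "Premium totals diverged"
```

Checks like these are easy to wrap in a test framework and schedule after each load, which is the "automation-first" pattern the listing calls for.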

Posted 2 months ago

Apply