
8521 Pyspark Jobs - Page 31

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

12.0 - 17.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Principal Data Engineer What You Will Do Let’s do this. Let’s change the world. Role Description: We are seeking a seasoned Principal Data Engineer to lead the design, development, and implementation of our data strategy. The ideal candidate possesses a deep understanding of data engineering principles, coupled with strong leadership and problem-solving skills. As a Principal Data Engineer, you will architect and oversee the development of robust data platforms, while mentoring and guiding a team of data engineers. Roles & Responsibilities: Possesses strong rapid prototyping skills and can quickly translate concepts into working code. Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and standard methodologies. Design, develop, and implement robust data architectures and platforms to support business objectives. Oversee the development and optimization of data pipelines and data integration solutions. Establish and maintain data governance policies and standards to ensure data quality, security, and compliance. Architect and manage cloud-based data solutions, leveraging AWS or other preferred platforms. Lead and motivate a strong data engineering team to deliver exceptional results. Identify, analyze, and resolve complex data-related challenges. Collaborate closely with business collaborators to understand data requirements and translate them into technical solutions. Stay abreast of emerging data technologies and explore opportunities for innovation. What We Expect Of You We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Doctorate degree / Master's degree / Bachelor's degree and 12 to 17 years of experience in Computer Science, IT, or a related field Demonstrated proficiency in leveraging cloud platforms (AWS, Azure, GCP) for data engineering solutions. Strong understanding of cloud architecture principles and cost optimization strategies. Proficient in Python, PySpark, and SQL. Hands-on experience with big data ETL performance tuning. Proven ability to lead and develop strong data engineering teams. Strong problem-solving, analytical, and critical thinking skills to address complex data challenges. 
Preferred Qualifications: Experienced with data modeling and performance tuning for both OLAP and OLTP databases Experienced with Apache Spark, Apache Airflow Experienced with software engineering best-practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven etc.), automated unit testing, and Dev Ops Experienced with AWS, GCP or Azure cloud services Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals Strong presentation and public speaking skills. What You Can Expect Of Us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
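For context on what the big data ETL performance-tuning expectation above typically involves, here is a minimal PySpark sketch; the storage paths, tables, and columns are hypothetical and not taken from the posting.

```python
# Hypothetical sketch: tuning a skewed join in a PySpark batch ETL job.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-tuning-sketch").getOrCreate()

# Large fact table and a small dimension table (paths are placeholders).
fact = spark.read.parquet("s3://example-bucket/claims/fact/")
dim = spark.read.parquet("s3://example-bucket/claims/dim_provider/")

# Broadcasting the small side avoids a full shuffle of the large table.
joined = fact.join(F.broadcast(dim), on="provider_id", how="left")

# Aggregate, then control partitioning before writing to keep file sizes sane.
daily = (
    joined.groupBy("provider_id", "claim_date")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("claim_count"))
)

(
    daily.repartition("claim_date")           # align partitions with the write key
    .write.mode("overwrite")
    .partitionBy("claim_date")
    .parquet("s3://example-bucket/claims/daily_summary/")
)
```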

Posted 1 week ago

Apply

3.0 years

0 Lacs

Delhi, Delhi

On-site

Job Description: Hadoop & ETL Developer Location: Shastri Park, Delhi Experience: 3+ years Education: B.E./ B.Tech/ MCA/ MSC (IT or CS) / MS Salary: Up to ₹80k (final offer depends on the interview and experience) Notice Period: Immediate joiners, up to a maximum of 20 days. Only candidates from Delhi/NCR will be preferred. Job Summary: We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs. Key Responsibilities Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies. Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation. Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte. Develop and manage workflow orchestration using Apache Airflow. Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage. Optimize MapReduce and Spark jobs for performance, scalability, and efficiency. Ensure data quality, governance, and consistency across the pipeline. Collaborate with data engineering teams to build scalable and high-performance data solutions. Monitor, debug, and enhance big data workflows to improve reliability and efficiency. Required Skills & Experience: 3+ years of experience in Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark). Strong expertise in ETL processes, data transformation, and data warehousing. Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte. Proficiency in SQL and handling structured and unstructured data. Experience with NoSQL databases like MongoDB. Strong programming skills in Python or Scala for scripting and automation. Experience in optimizing Spark and MapReduce jobs for high-performance computing. Good understanding of data lake architectures and big data best practices. Preferred Qualifications Experience in real-time data streaming and processing. Familiarity with Docker/Kubernetes for deployment and orchestration. Strong analytical and problem-solving skills with the ability to debug and optimize data workflows. If you have a passion for big data, ETL, and large-scale data processing, we’d love to hear from you! Job Types: Full-time, Contractual / Temporary Pay: From ₹400,000.00 per year Work Location: In person
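As a hedged illustration of the real-time ingestion work this role describes, the sketch below reads events from Kafka with Spark Structured Streaming and lands them on HDFS; the broker addresses, topic, schema, and paths are assumed for the example.

```python
# Hypothetical sketch: streaming ingestion from Kafka into HDFS with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read raw events from a Kafka topic (brokers and topic are placeholders).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into columns.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Write micro-batches to HDFS as Parquet, with checkpointing for recovery.
query = (
    events.writeStream.format("parquet")
    .option("path", "hdfs:///data/events/")
    .option("checkpointLocation", "hdfs:///checkpoints/events/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```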

Posted 1 week ago

Apply

2.0 - 4.0 years

25 - 30 Lacs

Pune

Work from Office

Rapid7 is looking for Data Engineer to join our dynamic team and embark on a rewarding career journey Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 1 week ago

Apply

1.0 - 4.0 years

25 - 30 Lacs

Thane

Work from Office

Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. EsyCommerce is seeking a highly experienced Data Engineer to join our growing team in either Mumbai or Pune. This role requires a strong foundation in data engineering principles, coupled with experience in application development and data science techniques. The ideal candidate will be responsible for designing, developing, and maintaining robust data pipelines and applications, as well as leveraging analytical skills to transform data into valuable insights. This position calls for a blend of technical expertise, problem-solving abilities, and effective communication skills to drive data-driven solutions that meet business objectives.

Posted 1 week ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Primary Skills: Strong Python programming, PySpark queries, AWS, GIS, Palantir Foundry; PySpark queries are a MUST. Experience: 15+ years Location: Hyderabad (5 days a week, work from office) Responsibilities • Develop and enhance data processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation. • Collaborate with product and technology teams to design and validate the capabilities of the data platform • Identify, design, and implement process improvements: automating manual processes, optimizing for usability, re-designing for greater scalability • Provide technical support and usage guidance to the users of our platform’s services. • Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services. Qualifications • Experience building and optimizing data pipelines in a distributed environment • Experience supporting and working with cross-functional teams • Proficiency working in a Linux environment • 4+ years of advanced working knowledge of SQL, Python, and PySpark • Knowledge of Palantir • Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, CodePipeline • Experience with platform monitoring and alerting tools
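Since PySpark queries are flagged as a must, a small illustrative query is sketched below; the datasets and column names are invented and only stand in for the kind of GIS-flavoured analysis the role might involve.

```python
# Hypothetical sketch: a typical PySpark query - filter, join, aggregate, order.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-query-sketch").getOrCreate()

trips = spark.read.parquet("/data/gis/trips/")        # placeholder path
regions = spark.read.parquet("/data/gis/regions/")    # placeholder path

result = (
    trips.filter(F.col("trip_distance_km") > 0)
    .join(regions, on="region_id", how="inner")
    .groupBy("region_name")
    .agg(
        F.count("*").alias("trip_count"),
        F.round(F.avg("trip_distance_km"), 2).alias("avg_distance_km"),
    )
    .orderBy(F.desc("trip_count"))
)

result.show(20, truncate=False)
```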

Posted 1 week ago

Apply

4.0 years

0 Lacs

Greater Nashik Area

On-site

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Azure Data Engineer Location: Bengaluru Reporting to: Senior Manager Data Engineering Purpose of the role We are seeking an experienced Data Engineer with over 4 years of expertise in data engineering and a focus on leveraging GenAI solutions. The ideal candidate will have a strong background in Azure services, relational databases, and programming languages, including Python and PySpark. You will play a pivotal role in designing, building, and optimizing scalable data pipelines while integrating AI-driven solutions to enhance our data capabilities. Key tasks & accountabilities Data Pipeline Development: Design and implement efficient ETL/ELT pipelines using Azure Data Factory (ADF) and Azure Databricks (ADB). Ensure high performance and scalability of data pipelines. Relational Database Management: Work with relational databases to structure and query data efficiently. Design, optimize, and maintain database schemas. Programming and Scripting: Write, debug, and optimize Python, PySpark, and SQL code to process large datasets. Develop reusable code components and libraries for data processing. Data Quality and Governance: Implement data validation, cleansing, and monitoring mechanisms. Ensure compliance with data governance policies and best practices. Performance Optimization: Identify and resolve bottlenecks in data processing and storage. Optimize resource utilization on Azure services. Collaboration and Communication: Work closely with cross-functional teams, including AI, analytics, and product teams. Document processes, solutions, and best practices for future use. Qualifications, Experience, Skills Previous Work Experience 4+ years of experience in data engineering. Proficiency in Azure Data Factory (ADF) and Azure Databricks (ADB). Expertise in relational databases and advanced SQL. Strong programming skills in Python and PySpark. Experience with GenAI solutions is a plus. Familiarity with data governance and best practices. Level Of Educational Attainment Required Bachelor's degree in Computer Science, Information Technology, or a related field. Technical Expertise: Knowledge of machine learning pipelines and GenAI workflows. Experience with Azure Synapse or other cloud data platforms. Familiarity with CI/CD pipelines for data workflows. And above all of this, an undying love for beer! We dream big to create future with more cheers.
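A minimal Databricks-style PySpark sketch of the ETL and data-quality work described above, reading raw CSV from ADLS and writing a Delta table; the storage account, container, and columns are placeholders.

```python
# Hypothetical sketch: load raw CSV from ADLS, apply basic validation, write Delta.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls-delta-sketch").getOrCreate()

raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/"          # placeholder
curated_path = "abfss://curated@examplestorage.dfs.core.windows.net/sales/"  # placeholder

raw = spark.read.option("header", True).option("inferSchema", True).csv(raw_path)

# Simple validation/cleansing rules: drop rows missing keys, discard bad amounts.
validated = (
    raw.dropna(subset=["order_id", "order_date"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") >= 0)
    .dropDuplicates(["order_id"])
)

# Write as a Delta table partitioned by date (Delta is the Databricks default format).
(
    validated.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save(curated_path)
)
```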

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Job title: Senior Software Engineer Experience: 5-8 years Primary skills: Python, Spark or PySpark, DWH ETL. Database: SparkSQL or PostgreSQL Secondary skills: Databricks (Delta Lake, Delta tables, Unity Catalog) Work Model: Hybrid (twice weekly) Cab Facility: Yes Work Timings: 10am to 7pm Interview Process: 3 rounds (3rd round F2F Mandatory) Work Location: Karle Town Tech Park Nagawara, Hebbal Bengaluru 560045 About Business Unit: The Architecture Team plays a pivotal role in the end-to-end design, governance, and strategic direction of product development within Epsilon People Cloud (EPC). As a centre of technical excellence, the team ensures that every product feature is engineered to meet the highest standards of scalability, security, performance, and maintainability. Their responsibilities span across architectural ownership of critical product features, driving techno-product leadership, enforcing architectural governance, and ensuring systems are built with scalability, security, and compliance in mind. They design multi-cloud and hybrid-cloud solutions that support seamless integration across diverse environments and contribute significantly to interoperability between EPC products and the broader enterprise ecosystem. The team fosters innovation and technical leadership while actively collaborating with key partners to align technology decisions with business goals. Through this, the Architecture Team ensures the delivery of future-ready, enterprise-grade, efficient and performant, secure and resilient platforms that form the backbone of Epsilon People Cloud. Why we are looking for you: You have experience working as a Data Engineer with strong database fundamentals and ETL background. You have experience working in a Data warehouse environment and dealing with data volume in terabytes and above. You have experience working in relational data systems, preferably PostgreSQL and SparkSQL. You have excellent designing and coding skills and can mentor a junior engineer in the team. You have excellent written and verbal communication skills. You are experienced and comfortable working with global clients. You work well with teams and are able to work with multiple collaborators including clients, vendors and delivery teams. You are proficient with bug tracking and test management toolsets to support development processes such as CI/CD. What you will enjoy in this role: As part of the Epsilon Technology practice, the pace of the work matches the fast-evolving demands in the industry. You will get to work on the latest tools and technology and deal with data of petabyte-scale. Work on homegrown frameworks on Spark and Airflow etc. Exposure to the Digital Marketing Domain, where Epsilon is a market leader. Understand and work closely with consumer data across different segments that will eventually provide insights into consumer behaviours and patterns to design digital ad strategies. As part of the dynamic team, you will have opportunities to innovate and put your recommendations forward. Use existing standard methodologies and define new ones as industry standards evolve. Opportunity to work with Business, System and Delivery to build a solid foundation on the Digital Marketing Domain. An open and transparent environment that values innovation and efficiency. Click here to view how Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice. What will you do? 
Develop a deep understanding of the business context under which your team operates and present feature recommendations in an agile working environment. Lead, design and code solutions on and off database for ensuring application access to enable data-driven decision making for the company's multi-faceted ad serving operations. Working closely with Engineering resources across the globe to ensure enterprise data warehouse solutions and assets are actionable, accessible and evolving in lockstep with the needs of the ever-changing business model. This role requires deep expertise in spark and strong proficiency in ETL, SQL, and modern data engineering practices. Design, develop, and manage ETL/ELT pipelines in Databricks using PySpark/SparkSQL, integrating various data sources to support business operations Lead in the areas of solution design, code development, quality assurance, data modelling, business intelligence. Mentor Junior engineers in the team. Stay abreast of developments in the data world in terms of governance, quality and performance optimization. Able to have effective client meetings, understand deliverables, and drive successful outcomes. Qualifications: Bachelor's Degree in Computer Science or equivalent degree is required. 5 - 8 years of data engineering experience with expertise using Apache Spark and Databases (preferably Databricks) in marketing technologies and data management, and technical understanding in these areas. Monitor and tune Databricks workloads to ensure high performance and scalability, adapting to business needs as required. Solid experience in Basic and Advanced SQL writing and tuning. Experience with Python Solid understanding of CI/CD practices with experience in Git for version control and integration for spark data projects. Good understanding of Disaster Recovery and Business Continuity solutions Experience with scheduling applications with complex interdependencies, preferably Airflow Good experience in working with geographically and culturally diverse teams. Understanding of data management concepts in both traditional relational databases and big data lakehouse solutions such as Apache Hive, AWS Glue or Databricks. Excellent written and verbal communication skills. Ability to handle complex products. Good communication and problem-solving skills, with the ability to manage multiple priorities. Ability to diagnose and solve problems quickly. Diligent, able to multi-task, prioritize and able to quickly change priorities. Good time management. Good to have knowledge of cloud platforms (cloud security) and familiarity with Terraform or other infrastructure-as-code tools. About Epsilon: Epsilon is a global data, technology and services company that powers the marketing and advertising ecosystem. For decades, we have provided marketers from the world's leading brands the data, technology and services they need to engage consumers with 1 View, 1 Vision and 1 Voice. 1 View of their universe of potential buyers. 1 Vision for engaging each individual. And 1 Voice to harmonize engagement across paid, owned and earned channels. Epsilon's comprehensive portfolio of capabilities across our suite of digital media, messaging and loyalty solutions bridge the divide between marketing and advertising technology. We process 400+ billion consumer actions each day using advanced AI and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements. 
Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Epsilon is a global company with more than 9,000 employees around the world.
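To give a flavour of the Databricks and Delta Lake work this role centres on, here is a hedged upsert sketch using the Delta Lake MERGE API; the table path, key column, and incoming dataset are assumptions, not Epsilon's implementation.

```python
# Hypothetical sketch: incremental upsert (MERGE) into a Delta table on Databricks.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-merge-sketch").getOrCreate()

# Incoming daily increment (placeholder source).
updates = spark.read.parquet("/mnt/staging/profiles_increment/")

# Target Delta table (placeholder path); assumes the table already exists.
target = DeltaTable.forPath(spark, "/mnt/warehouse/consumer_profiles")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.profile_id = s.profile_id")
    .whenMatchedUpdateAll()      # update existing profiles
    .whenNotMatchedInsertAll()   # insert brand-new profiles
    .execute()
)
```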

Posted 1 week ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description Intermediate Azure Developer We’re the obstacle overcomers, the problem get-arounders. From figuring it out to getting it done… our innovative culture demands “yes and how!” We are UPS. We are the United Problem Solvers. About Applications Development At UPS Technology Our technology teams use expertise in applications programming & database technologies to support enterprise infrastructure. They create & support application frameworks & tools. They support deployment of applications & services across a multi-tier environment that processes up to 38 million packages in a single day (4.7 billion annually). This team works closely with our customers to build innovative technologies that are customized to drive business goals & provide the ultimate customer experience. As a member of the applications development family, you will help UPS grow & provide valuable services across the globe. About This Role The Intermediate Azure Developer will analyze business requirements, translating those requirements into Azure-specific solutions using the Azure toolsets (Out of the Box, Configuration, Customization). He/She should have the following: experience in designing & building a solution using Azure Declarative & Programmatic Approach, knowledge of integrating Azure with Salesforce, on-premise legacy systems and other cloud solutions, experience with integration middleware and Enterprise Service Bus. He/She should also have experience in translating design requirements or agile user stories into Azure-specific solutions, consuming or sending messages in XML/JSON format to 3rd parties using SOAP and REST APIs, expertise in Azure PaaS Service SDKs for .NET, .Net Core, Web API, like Storage, App Insights, Fluent API, Azure App Services, Azure Serverless, Microservices on Azure, API Management, Event Hub, Logic Apps, Service Bus & Message Queues, Azure Storage, Key Vaults and Application Insights, Azure Jobs, etc. He/She collaborates with teams and supports emerging technologies to ensure effective communication and achievement of objectives. Additional Details Will be working on a global deployment of Azure Platform Management to 40 countries and corresponding languages, 1000 locations and 25,000 users Develop large-scale distributed software services and solutions using Azure technologies. Develop best-in-class engineering services that are well-defined, modularized, secure, reliable, configurable, flexible, diagnosable, actively monitored, and reusable Hands-on with the use of various Azure PaaS Service SDKs for .NET, .Net Core, Web API, like Storage, App Insights, Fluent API, etc. Hands-on experience with Azure App Services, Azure Serverless, Microservices on Azure, API Management, Event Hub, Logic Apps, Service Bus & Message Queues, Azure Storage, Key Vaults and Application Insights, Azure Jobs, Databricks, Notebooks, PySpark Scripting etc. 
Hands-on experience with Azure DevOps building CI/CD, Azure support, Code management branching, etc. Good knowledge of programming and querying SQL Server databases Experience on writing automated test cases and different automated testing frameworks (.NUnit etc.) Ensure comprehensive test coverage to validate the functionality and performance of developed solutions Performs tasks within planned durations and established deadlines. Collaborates with teams to ensure effective communication in supporting the achievement of objectives. Strong Ability to debug and resolve issues/defects Author technical approach and design documentation Collaborate with the offshore team on design discussions and development items Minimum Qualifications Experience in designing & building a solution using Azure Declarative & Programmatic Approach. Experience with integration middleware and Enterprise Service Bus Experience in consuming or sending the message in XML\JSON format to 3rd party using SOAP and REST APIs Hands-on with the use of various Azure PaaS Service SDKs for .NET, .Net Core, SQL, Web API, like Storage, App Insights, Fluent API, etc Preferably 6+ years Development experience Minimum 4+ years of hands-on experience in development with Azure App Services, Azure Serverless, Microservices on Azure, API Management, Event Hub, Function Apps, Web Jobs, Service Bus & Message Queues, Azure Storage, Key Vaults and Application Insights, Azure Jobs, Databricks, Notebooks, PySpark Scripting, Runbooks etc. Experience with Azure DevOps building CI/CD, Azure support, Code management branching, Jenkins, Kubernetes, etc. Good knowledge of programming and querying SQL Server databases Experience on writing automated test cases and different automated testing frameworks (.NUnit etc.) Experience with Agile Development Must be detail oriented. Self-Motivated Learner Ability to collaborate with others. Excellent written and verbal communication skills Bachelor's degree and/or Master's degree in Computer Science or related discipline or the equivalent in education and work experience Azure Certifications Azure Fundamentals (mandatory) Azure Administrator Associate (desired) Azure Developer Associate (mandatory) This position offers an exceptional opportunity to work for a Fortune 50 industry leader. If you are selected, you will join our dynamic technology team in making a difference to our business and customers. Do you think you have what it takes? Prove it! At UPS, ambition knows no time zone. Basic Qualifications If required and where permitted by applicable law, employees must be fully vaccinated for COVID-19 by their date of hire/placement to be considered for employment. Fully vaccinated means two weeks after receiving the second shot for Pfizer and Moderna, or two weeks after Johnson & Johnson Other Criteria UPS is an equal opportunity employer. UPS does not discriminate on the basis of race/color/religion/sex/national origin/veteran/disability/age/sexual orientation/gender identity or any other characteristic protected by law. Employee Type Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Avant de postuler à un emploi, sélectionnez votre langue de préférence parmi les options disponibles en haut à droite de cette page. Découvrez votre prochaine opportunité au sein d'une organisation qui compte parmi les 500 plus importantes entreprises mondiales. Envisagez des opportunités innovantes, découvrez notre culture enrichissante et travaillez avec des équipes talentueuses qui vous poussent à vous développer chaque jour. Nous savons ce qu’il faut faire pour diriger UPS vers l'avenir : des personnes passionnées dotées d’une combinaison unique de compétences. Si vous avez les qualités, de la motivation, de l'autonomie ou le leadership pour diriger des équipes, il existe des postes adaptés à vos aspirations et à vos compétences d'aujourd'hui et de demain. Fiche De Poste Job Title: Intermediate Data Developer – Azure ADF and Databricks Experience Range: 5-7 Years Location: Chennai, Hybrid Employment Type: Full-Time About UPS UPS is a global leader in logistics, offering a broad range of solutions that include transportation, distribution, supply chain management, and e-commerce. Founded in 1907, UPS operates in over 220 countries and territories, delivering packages and providing specialized services worldwide. Our mission is to enable commerce by connecting people, places, and businesses, with a strong focus on sustainability and innovation. About UPS Supply Chain Symphony™ The UPS Supply Chain Symphony™ platform is a cloud-based solution that seamlessly integrates key supply chain components, including shipping, warehousing, and inventory management, into a unified platform. This solution empowers businesses by offering enhanced visibility, advanced analytics, and customizable dashboards to streamline global supply chain operations and decision-making. About The Role We are seeking an experienced Senior Data Developer to join our data engineering team responsible for building and maintaining complex data solutions using Azure Data Factory (ADF), Azure Databricks , and Cosmos DB . The role involves designing and developing scalable data pipelines, implementing data transformations, and ensuring high data quality and performance. The Senior Data Developer will work closely with data architects, testers, and analysts to deliver robust data solutions that support strategic business initiatives. The ideal candidate should possess deep expertise in big data technologies, data integration, and cloud-native data engineering solutions on Microsoft Azure. This role also involves coaching junior developers, conducting code reviews, and driving strategic improvements in data architecture and design patterns. Key Responsibilities Data Solution Design and Development: Design and develop scalable and high-performance data pipelines using Azure Data Factory (ADF). Implement data transformations and processing using Azure Databricks. Develop and maintain NoSQL data models and queries in Cosmos DB. Optimize data pipelines for performance, scalability, and cost efficiency. Data Integration and Architecture: Integrate structured and unstructured data from diverse data sources. Collaborate with data architects to design end-to-end data flows and system integrations. Implement data security, governance, and compliance standards. Performance Tuning and Optimization: Monitor and tune data pipelines and processing jobs for performance and cost efficiency. Optimize data storage and retrieval strategies for Azure SQL and Cosmos DB. 
Collaboration and Mentoring: Collaborate with cross-functional teams including data testers, architects, and business analysts. Conduct code reviews and provide constructive feedback to improve code quality. Mentor junior developers, fostering best practices in data engineering and cloud development. Primary Skills Data Engineering: Azure Data Factory (ADF), Azure Databricks. Cloud Platform: Microsoft Azure (Data Lake Storage, Cosmos DB). Data Modeling: NoSQL data modeling, Data warehousing concepts. Performance Optimization: Data pipeline performance tuning and cost optimization. Programming Languages: Python, SQL, PySpark Secondary Skills DevOps and CI/CD: Azure DevOps, CI/CD pipeline design and automation. Security and Compliance: Implementing data security and governance standards. Agile Methodologies: Experience in Agile/Scrum environments. Leadership and Mentoring: Strong communication and coaching skills for team collaboration. Soft Skills Strong problem-solving abilities and attention to detail. Excellent communication skills, both verbal and written. Effective time management and organizational capabilities. Ability to work independently and within a collaborative team environment. Strong interpersonal skills to engage with cross-functional teams. Educational Qualifications Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. Relevant certifications in Azure and Data Engineering, such as: Microsoft Certified: Azure Data Engineer Associate Microsoft Certified: Azure Solutions Architect Expert Databricks Certified Data Engineer Associate or Professional About The Team As a Senior Data Developer , you will be working with a dynamic, cross-functional team that includes developers, product managers, and other quality engineers. You will be a key player in the quality assurance process, helping shape testing strategies and ensuring the delivery of high-quality web applications. Type De Contrat en CDI Chez UPS, égalité des chances, traitement équitable et environnement de travail inclusif sont des valeurs clefs auxquelles nous sommes attachés.
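A hedged sketch of the ADF-driven Databricks pattern described here: a notebook reads pipeline parameters via widgets and writes to Cosmos DB through the Azure Cosmos DB Spark connector. The endpoint, secret scope, database, and container names are placeholders, and the connector is assumed to be installed on the cluster.

```python
# Hypothetical sketch: Databricks notebook parameterized by ADF, writing to Cosmos DB.
# (dbutils is available implicitly inside a Databricks notebook.)
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# ADF passes run parameters to the notebook as widgets (names are examples).
run_date = dbutils.widgets.get("run_date")          # e.g. "2025-08-01"
source_path = dbutils.widgets.get("source_path")    # e.g. an ADLS folder

df = (
    spark.read.format("delta").load(source_path)
    .filter(F.col("shipment_date") == run_date)
)

# Cosmos DB Spark connector configuration (all values are placeholders).
cosmos_config = {
    "spark.cosmos.accountEndpoint": "https://example-account.documents.azure.com:443/",
    "spark.cosmos.accountKey": dbutils.secrets.get("kv-scope", "cosmos-key"),
    "spark.cosmos.database": "supplychain",
    "spark.cosmos.container": "shipments",
}

(
    df.write.format("cosmos.oltp")
    .options(**cosmos_config)
    .mode("append")
    .save()
)
```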

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: GCP Data Engineer 34306 Job Type: Full-Time Work Mode: Hybrid Location: Chennai Budget: ₹18–20 LPA Notice Period: Immediate Joiners Preferred Role Overview We are seeking a proactive Full Stack Data Engineer with a strong focus on Google Cloud Platform (GCP) and data engineering tools. The ideal candidate will contribute to building analytics products supporting supply chain insights and will be responsible for developing cloud-based data pipelines, APIs, and user interfaces. The role demands high standards of software engineering, agile practices like Test-Driven Development (TDD), and experience in modern data architectures. Key Responsibilities Design, build, and deploy scalable data pipelines and analytics platforms using GCP tools like BigQuery, Dataflow, Dataproc, Data Fusion, and Cloud SQL. Implement and maintain Infrastructure as Code (IaC) using Terraform and CI/CD pipelines using Tekton. Develop robust APIs using Python, Java, and Spring Boot, and deliver frontend interfaces using Angular, React, or Vue. Build and support data integration workflows using Airflow, PySpark, and PostgreSQL. Collaborate with cross-functional teams in an Agile environment, leveraging Jira, paired programming, and TDD. Ensure cloud deployments are secure, scalable, and performant on GCP. Mentor team members and promote continuous learning, clean code practices, and Agile principles. Mandatory Skills GCP services: BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL Programming: Python, Java, Spring Boot Frontend: Angular, React, Vue, TypeScript, JavaScript Data Orchestration: Airflow, PySpark DevOps/CI-CD: Terraform, Tekton, Jenkins Databases: PostgreSQL, Cloud SQL, NoSQL API development and integration Experience 5+ years in software/data engineering Minimum 1 year in GCP-based deployment and cloud architecture Education Bachelor’s or Master’s in Computer Science, Engineering, or related technical discipline Desired Traits Passion for clean, maintainable code Strong problem-solving skills Agile mindset with an eagerness to mentor and collaborate Skills: TypeScript, Data Fusion, Terraform, Java, Spring Boot, Dataflow, Data Integration, Cloud SQL, JavaScript, BigQuery, React, PostgreSQL, NoSQL, Vue, Data, PySpark, Dataproc, SQL, Cloud, Angular, Python, Tekton, API Development, GCP Services, Jenkins, Airflow, GCP
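A small hedged sketch of the BigQuery-plus-PySpark pattern this role describes, using the spark-bigquery connector (typically available on Dataproc); the project, dataset, and bucket names are invented.

```python
# Hypothetical sketch: read from BigQuery with the spark-bigquery connector,
# aggregate with PySpark, and land results in GCS.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bq-supply-chain-sketch").getOrCreate()

# Requires the spark-bigquery connector on the cluster.
orders = (
    spark.read.format("bigquery")
    .option("table", "example-project.supply_chain.orders")
    .load()
)

late_by_supplier = (
    orders.filter(F.col("delivered_at") > F.col("promised_at"))
    .groupBy("supplier_id")
    .agg(F.count("*").alias("late_orders"))
)

# Write the aggregate to GCS as Parquet for downstream analytics.
late_by_supplier.write.mode("overwrite").parquet(
    "gs://example-analytics-bucket/supply_chain/late_by_supplier/"
)
```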

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Zensar Technologies is hiring an Azure Data Engineer for Hyderabad. If you're passionate about Azure Databricks, PySpark, and Synapse, this could be a great fit! Required experience: Azure Databricks and hands-on PySpark with tuning; Azure Data Factory pipelines for loading data into ADB, with performance tuning; Azure Synapse; Azure Monitoring and Log Analytics (error handling in ADF pipelines and ADB); Logic Apps and Functions; performance tuning across Databricks, Data Factory and Synapse; Databricks data loading (layers) and export (choosing connection options and the best approach for fast report access). If you're interested or know someone who might be, please share your updated resume with me at - Divyanka.kumari2@zensar.com

Posted 1 week ago

Apply

5.0 - 7.0 years

7 - 8 Lacs

Cochin

On-site

5 - 7 Years 1 Opening Kochi Role description The Snowflake Developer will play a crucial role in designing, developing, and implementing data solutions using Snowflake's cloud-based data platform. The developer will be responsible for writing efficient procedures with Spark or SQL to facilitate data processing, transformation, and analysis. Python/PySpark and SQL skills must be strong, with some experience in data pipelines or other data engineering aspects. Knowledge of the AWS platform. He/she must have an interest in upskilling, be eager to learn, and have the right attitude towards learning. Good expertise in SDLC/Agile. Experience in SQL, complex queries, and optimization. Experience in the Spark ecosystem, and familiarity with MongoDB data loads, Snowflake, and the AWS platform (EMR, Glue, S3). Hands-on experience in writing advanced SQL queries, familiarity with a variety of databases. Experience in handling end-to-end data testing for complex big data projects, which includes extensive experience in writing and executing test cases, performing data validations, system testing and performance checks. Skills: Snowflake development, Python, PySpark, AWS About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
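To make the Spark-plus-Snowflake expectation concrete, here is a hedged sketch using the Snowflake Spark connector; the connection options, credentials, and table names are placeholders and assume the connector libraries are on the classpath.

```python
# Hypothetical sketch: transform data with PySpark and load it into Snowflake.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("snowflake-load-sketch").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")   # placeholder source

daily = (
    events.groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Snowflake Spark connector options (all values are placeholders, not real credentials).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

(
    daily.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "DAILY_EVENT_COUNTS")
    .mode("overwrite")
    .save()
)
```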

Posted 1 week ago

Apply

10.0 - 14.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Face-to-face interview on 2nd August 2025 in Hyderabad. Apply here - Job description - https://careers.ey.com/job-invite/1604461/ Experience Required: Minimum 8 years Job Summary: We are seeking a skilled Data Engineer with a strong background in data ingestion, processing, and storage. The ideal candidate will have experience working with various data sources and technologies, particularly in a cloud environment. You will be responsible for designing and implementing data pipelines, ensuring data quality, and optimizing data storage solutions. Key Responsibilities: Design, develop, and maintain scalable data pipelines for data ingestion and processing using Python, Spark, and AWS services. Work with on-prem Oracle databases, batch files, and Confluent Kafka for data sourcing. Implement and manage ETL processes using AWS Glue and EMR for batch and streaming data. Develop and maintain data storage solutions using Medallion Architecture in S3, Redshift, and Oracle. Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs. Monitor and optimize data workflows using Airflow and other orchestration tools. Ensure data quality and integrity throughout the data lifecycle. Implement CI/CD practices for data pipeline deployment using Terraform and other tools. Utilize monitoring and logging tools such as CloudWatch, Datadog, and Splunk to ensure system reliability and performance. Communicate effectively with stakeholders to gather requirements and provide updates on project status. Technical Skills Required: Proficient in Python for data processing and automation. Strong experience with Apache Spark for large-scale data processing. Familiarity with AWS S3 for data storage and management. Experience with Kafka for real-time data streaming. Knowledge of Redshift for data warehousing solutions. Proficient in Oracle databases for data management. Experience with AWS Glue for ETL processes. Familiarity with Apache Airflow for workflow orchestration. Experience with EMR for big data processing. Mandatory: Strong AWS data engineering skills.
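A hedged sketch of the Medallion-style batch step mentioned above, promoting raw JSON from an S3 bronze layer to a cleansed, partitioned silver layer; the bucket names, schema, and business key are hypothetical.

```python
# Hypothetical sketch: bronze-to-silver promotion in a Medallion layout on S3.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("medallion-silver-sketch").getOrCreate()

bronze = spark.read.json("s3://example-lake/bronze/claims/")   # raw ingested events

# Keep only the latest record per business key and standardize column types.
latest = Window.partitionBy("claim_id").orderBy(F.col("ingested_at").desc())

silver = (
    bronze.withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)
    .drop("rn")
    .withColumn("claim_amount", F.col("claim_amount").cast("decimal(18,2)"))
    .withColumn("claim_date", F.to_date("claim_date"))
)

(
    silver.write.mode("overwrite")
    .partitionBy("claim_date")
    .parquet("s3://example-lake/silver/claims/")
)
```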

Posted 1 week ago

Apply

12.0 - 17.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Principal Data Engineer What you will do Let’s do this. Let’s change the world. Role Description: We are seeking a seasoned Principal Data Engineer to lead the design, development, and implementation of our data strategy. The ideal candidate possesses a deep understanding of data engineering principles, coupled with strong leadership and problem-solving skills. As a Principal Data Engineer, you will architect and oversee the development of robust data platforms, while mentoring and guiding a team of data engineers. Roles & Responsibilities: Possesses strong rapid prototyping skills and can quickly translate concepts into working code. Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and standard methodologies. Design, develop, and implement robust data architectures and platforms to support business objectives. Oversee the development and optimization of data pipelines and data integration solutions. Establish and maintain data governance policies and standards to ensure data quality, security, and compliance. Architect and manage cloud-based data solutions, leveraging AWS or other preferred platforms. Lead and motivate a strong data engineering team to deliver exceptional results. Identify, analyze, and resolve complex data-related challenges. Collaborate closely with business collaborators to understand data requirements and translate them into technical solutions. Stay abreast of emerging data technologies and explore opportunities for innovation. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Doctorate degree / Master's degree / Bachelor's degree and 12 to 17 years of experience in Computer Science, IT, or a related field Demonstrated proficiency in leveraging cloud platforms (AWS, Azure, GCP) for data engineering solutions. Strong understanding of cloud architecture principles and cost optimization strategies. Proficient in Python, PySpark, and SQL. Hands-on experience with big data ETL performance tuning. Proven ability to lead and develop strong data engineering teams. Strong problem-solving, analytical, and critical thinking skills to address complex data challenges. 
Preferred Qualifications: Experienced with data modeling and performance tuning for both OLAP and OLTP databases Experienced with Apache Spark, Apache Airflow Experienced with software engineering best-practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven etc.), automated unit testing, and Dev Ops Experienced with AWS, GCP or Azure cloud services Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals Strong presentation and public speaking skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 week ago

Apply

6.0 years

5 - 9 Lacs

Hyderābād

Remote

Technical Lead – Big Data & Python skillset As a Technical Lead, you will be a strong full-stack developer and individual contributor, responsible for designing application modules and delivering them from the technical standpoint. You should be highly skilled at producing high-level designs with the architect and at technically leading module implementations. Must be a strong developer with the ability to innovate. Should be the go-to person on the assigned modules, applications/projects and initiatives. Maintains appropriate certifications and applies respective skills on project engagements. Work you’ll do A unique opportunity to be a part of a growing Delivery, Methods & Tools team that drives consistency, quality, and efficiency of the services delivered to stakeholders. Responsibilities: Full-stack, hands-on developer and strong individual contributor. Go-to person on the assigned projects. Able to understand and implement the project as per the proposed architecture. Implements best design principles and patterns. Understands and implements the security aspects of the application. Knows ADO and is familiar with using it. Obtains/maintains appropriate certifications and applies respective skills on project engagements. Leads or contributes significantly to the Practice. Estimates and prioritizes product backlogs. Defines work items. Works on unit test automation. Recommends improvements to existing software programs as deemed necessary. Go-to person in the team for any technical issues. Conducts peer reviews. Conducts tech sessions within the team. Provides input to standards and guidelines. Implements best practices to enable consistency across all projects. Participates in continuous improvement processes, as assigned. Mentors and coaches juniors in the team. Contributes to POCs. Supports the QA team with clarifications/doubts. Takes ownership of the tollgate and deployment activities. Oversees the development of documentation. Participates in regular work, status communications and stakeholder updates. Supports development of intellectual capital. Contributes to the knowledge network. Acts as a technical escalation point. Conducts sprint reviews. Does code optimization and suggests best practices to the team. Skills: Education qualification: BE/B.Tech (IT/CS/Electronics) / MCA / MSc Computer Science. 6-9 years of IT experience in application development, support or maintenance activities. 2+ years of experience in team management. Must have in-depth knowledge of software development lifecycles including agile development and testing. Enterprise Data Management framework, data security & compliance (optional): data ingestion, storage and transformation; data auditing and validation (optional); data visualization with Power BI (optional); data analytics systems (optional); scaling and handling large data sets. Designing & building data services, with at least 2+ years in: Azure SQL DB, SQL Warehouse, ADF, Azure Storage, ADO CI/CD, Azure Synapse. Data model design: data entities, modeling and depiction; metadata management (optional). Database development patterns and practices: SQL/NoSQL (relational/non-relational, native JSON), flexi-schema, indexing practices, master/child model data management, columnar and row stores, API/SDK for NoSQL DB operations and management. 
Design and implementation of data warehouse, Azure Synapse, Data Lake and Delta Lake; Apache Spark management. Programming languages: PySpark/Python, C# (optional); API: invoke/request and response; PowerShell with Azure CLI (optional). Git with ADO repo management, branching strategies, version control management: rebasing, filtering, cloning, merging. Debugging, performance tuning and optimization skills: ability to analyze PySpark code and PL/SQL, enhancing response times, GC management, debugging, logging and alerting techniques. Prior experience that demonstrates good business understanding is needed (experience in a professional services organization is a plus). Excellent written and verbal communication, organization, analytical, planning and leadership skills. Strong management, communication, technical and remote collaboration skills are a must. Experience in dealing with multiple projects and cross-functional teams, and ability to coordinate across teams in a large matrix organization environment. Ability to effectively conduct technical discussions directly with Project/Product management and clients. Excellent team collaboration skills. Education & Experience: Education qualification: BE/B.Tech (IT/CS/Electronics) / MCA / MSc Computer Science. 6-9 years of domain experience or other relevant industry experience. 2+ years of Product Owner, Business Analyst or System Analysis experience. Minimum 3+ years of software development experience in .NET projects. 3+ years of experience in Agile/Scrum methodology. Work timings: 9am-4pm, 7pm-9pm Location: Hyderabad Experience: 6-9 yrs The team At Deloitte, the Shared Services center improves overall efficiency and control while giving every business unit access to the company’s best and brightest resources. It also lets business units focus on what really matters – satisfying customers and developing new products and services to sustain competitive advantage. A shared services center is a simple concept, but making it work is anything but easy. It involves consolidating and standardizing a wildly diverse collection of systems, processes, and functions. And it requires a high degree of cooperation among business units that generally are not accustomed to working together – with people who do not necessarily want to change. The USI shared services team provides a wide array of services to the U.S. and is constantly evaluating and expanding its portfolio. The shared services team provides call center support, document services support, financial processing and analysis support, records management support, ethics and compliance support and admin assistant support. How you’ll grow At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning Center in the Hyderabad offices is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. 
Explore DU: The Leadership Center in India Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. #CAP-PD Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits to help you thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 300914

Posted 1 week ago

Apply

8.0 years

3 - 7 Lacs

Gurgaon

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Design and develop scalable systems for processing unstructured data into actionable insights using Python, Flask, and Azure Cognitive Services Integrate Optical Character Recognition (OCR), Speech-to-Text, and NLP models into workflows to handle various file formats such as PDFs, images, audio files, and text documents Implement robust error-handling mechanisms, multithreaded architectures, and RESTful APIs to ensure seamless user experiences. Utilize Azure OpenAI, Azure Speech SDK, and Azure Form Recognizer to create AI-powered solutions tailored to meet complex business requirements Collaborate with cross-functional teams to drive innovation and implement analytics workflows and ML models to enhance business processes and decision-making Ensure the accuracy, efficiency, and scalability of systems focusing on healthcare claims processing, document digitization, and data extraction Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: 8+ years of relevant experience in AI/ML engineering and cognitive automation Proven experience as an AI/ML Engineer, Software Engineer, Data Analyst, or a similar role in the tech industry Extensive experience with Azure Cognitive Services and other AI technologies SQL, Python, PySpark, Scala experience Proficient in developing and deploying machine learning models and handling large data sets Proven solid programming skills in Python and familiarity with Flask web framework Proven excellent problem-solving skills and the ability to work in a fast-paced environment Proven solid communication and collaboration skills, capable of working effectively with cross-functional teams. Demonstrated ability to implement robust ETL or ELT workflows for structured and unstructured data ingestion, transformation, and storage Preferred Qualification: Experience in healthcare industries Skills: Python Programming and SQL Data Analytics and Machine Learning Classification and Unsupervised Learning Regression and NLP Cloud and DevOps Foundations Data Visualization and Reporting At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. 
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
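As a rough illustration of the request-handling pattern described in the posting above, here is a minimal Flask sketch that accepts an uploaded document and hands it to a text-extraction routine; the route, function names, and the stubbed-out cognitive-services call are assumptions for illustration rather than the team's actual implementation.

from flask import Flask, request, jsonify

app = Flask(__name__)

def extract_text(file_bytes, content_type):
    # Placeholder for a call to an OCR or speech-to-text service
    # (the posting mentions Azure Form Recognizer and the Azure Speech SDK).
    raise NotImplementedError("wire the cognitive-services client in here")

@app.route("/process", methods=["POST"])
def process_document():
    uploaded = request.files.get("file")
    if uploaded is None:
        return jsonify({"error": "no file provided"}), 400
    try:
        text = extract_text(uploaded.read(), uploaded.content_type)
    except NotImplementedError as exc:
        # Robust error handling, as the posting calls for.
        return jsonify({"error": str(exc)}), 501
    return jsonify({"characters": len(text), "preview": text[:200]})

if __name__ == "__main__":
    app.run(debug=True)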

Posted 1 week ago

Apply

2.0 - 4.0 years

5 - 9 Lacs

Gurgaon

On-site

Lead Assistant Manager EXL/LAM/1431542 – Services, Gurgaon. Posted On: 25 Jul 2025. End Date: 08 Sep 2025. Required Experience: 2 - 4 Years.
Basic Section – Number Of Positions: 5 | Band: B2 | Band Name: Lead Assistant Manager | Cost Code: D014365 | Campus/Non Campus: NON CAMPUS | Employment Type: Permanent | Requisition Type: New | Max CTC: 1200000.0000 - 1500000.0000 | Complexity Level: Not Applicable | Work Type: Hybrid – Working Partly From Home And Partly From Office
Organisational Group: Analytics | Sub Group: Banking & Financial Services | Organization: Services | LOB: Banking & Financial Services | SBU: Analytics | Country: India | City: Gurgaon | Center: EXL - Gurgaon Center 38
Skills: SQL, Python, Data Analytics | Minimum Qualification: B.TECH/B.E | Certification: No data available
Job Description – About the Role: We are seeking a results-driven and detail-oriented data analyst to support data-driven decision-making within banking risk operations. This role involves working closely with stakeholders to provide actionable insights, enhance strategies, and drive operational efficiencies using tools such as SQL, Python, and advanced analytics. Key Responsibilities: Analyze large volumes of data to identify trends, patterns, and performance drivers. Collaborate with different teams to support and influence decision-making processes. Perform root cause analysis and recommend improvements to optimize processes. Design and track key KPIs. Ensure data integrity and accuracy across reporting tools and business metrics. Translate complex analytical findings into business-friendly insights that support decision-making. Develop and automate dashboards and reports using Power BI/Tableau to provide clear, actionable insights to operations and management teams. Required Skills & Qualifications: Education: Bachelor's degree in Engineering, Mathematics, Statistics, Finance, Economics, or a related field. Master's degree is a plus. Experience: 2–4 years of hands-on experience in a data or business analytics role, preferably within the BFSI domain. Technical Skills: Strong proficiency in SQL & Python. Solid understanding of analytical techniques and problem-solving skills. Business Acumen: Understanding of the Banking & Financial sector (preferred). Bonus Skills: PySpark, Machine Learning, Tableau/Power BI. Workflow Type: L&S-DA-Consulting
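For the KPI-tracking and reporting work described above (and in the two similar EXL roles that follow), here is a small pandas sketch of a daily approval-rate metric; the CSV source and column names are purely hypothetical.

import pandas as pd

# Hypothetical extract of loan-application decisions.
df = pd.read_csv("applications.csv", parse_dates=["decision_date"])

daily_kpi = (
    df.assign(approved=df["status"].eq("APPROVED"))
      .groupby(df["decision_date"].dt.date)
      .agg(applications=("status", "size"),
           approval_rate=("approved", "mean"))
      .reset_index()
)

# Feed the result into a Power BI / Tableau extract or an automated report.
print(daily_kpi.tail())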

Posted 1 week ago

Apply

4.0 - 6.0 years

5 - 10 Lacs

Gurgaon

On-site

Manager EXL/M/1431614 – Services, Gurgaon. Posted On: 25 Jul 2025. End Date: 08 Sep 2025. Required Experience: 4 - 6 Years.
Basic Section – Number Of Positions: 5 | Band: C1 | Band Name: Manager | Cost Code: D014365 | Campus/Non Campus: NON CAMPUS | Employment Type: Permanent | Requisition Type: New | Max CTC: 1600000.0000 - 2200000.0000 | Complexity Level: Not Applicable | Work Type: Hybrid – Working Partly From Home And Partly From Office
Organisational Group: Analytics | Sub Group: Banking & Financial Services | Organization: Services | LOB: Banking & Financial Services | SBU: Analytics | Country: India | City: Gurgaon | Center: Gurgaon-SEZ BPO Solutions
Skills: SQL, Python | Minimum Qualification: B.TECH/B.E | Certification: No data available
Job Description – About the Role: We are seeking a results-driven and detail-oriented data analyst to support data-driven decision-making within banking risk operations. This role involves working closely with stakeholders to provide actionable insights, enhance strategies, and drive operational efficiencies using tools such as SQL, Python, and advanced analytics. Key Responsibilities: Analyze large volumes of data to identify trends, patterns, and performance drivers. Collaborate with different teams to support and influence decision-making processes. Perform root cause analysis and recommend improvements to optimize processes. Design and track key KPIs. Ensure data integrity and accuracy across reporting tools and business metrics. Translate complex analytical findings into business-friendly insights that support decision-making. Develop and automate dashboards and reports using Power BI/Tableau to provide clear, actionable insights to operations and management teams. Required Skills & Qualifications: Education: Bachelor's degree in Engineering, Mathematics, Statistics, Finance, Economics, or a related field. Master's degree is a plus. Experience: 4–6 years of hands-on experience in a data or business analytics role, preferably within the BFSI domain. Technical Skills: Strong proficiency in SQL & Python. Solid understanding of analytical techniques and problem-solving skills. Business Acumen: Understanding of the Banking & Financial sector (preferred). Bonus Skills: PySpark, Machine Learning, Tableau/Power BI. Workflow Type: L&S-DA-Consulting

Posted 1 week ago

Apply

1.0 - 3.0 years

5 - 9 Lacs

Gurgaon

On-site

Assistant Manager EXL/AM/1431601 – Services, Gurgaon. Posted On: 25 Jul 2025. End Date: 08 Sep 2025. Required Experience: 1 - 3 Years.
Basic Section – Number Of Positions: 5 | Band: B1 | Band Name: Assistant Manager | Cost Code: D014365 | Campus/Non Campus: NON CAMPUS | Employment Type: Permanent | Requisition Type: New | Max CTC: 700000.0000 - 1200000.0000 | Complexity Level: Not Applicable | Work Type: Hybrid – Working Partly From Home And Partly From Office
Organisational Group: Analytics | Sub Group: Banking & Financial Services | Organization: Services | LOB: Banking & Financial Services | SBU: Analytics | Country: India | City: Gurgaon | Center: Gurgaon-SEZ BPO Solutions
Skills: SQL, Python | Minimum Qualification: B.TECH/B.E | Certification: No data available
Job Description – About the Role: We are seeking a results-driven and detail-oriented data analyst to support data-driven decision-making within banking risk operations. This role involves working closely with stakeholders to provide actionable insights, enhance strategies, and drive operational efficiencies using tools such as SQL, Python, and advanced analytics. Key Responsibilities: Analyze large volumes of data to identify trends, patterns, and performance drivers. Collaborate with different teams to support and influence decision-making processes. Perform root cause analysis and recommend improvements to optimize processes. Design and track key KPIs. Ensure data integrity and accuracy across reporting tools and business metrics. Translate complex analytical findings into business-friendly insights that support decision-making. Develop and automate dashboards and reports using Power BI/Tableau to provide clear, actionable insights to operations and management teams. Required Skills & Qualifications: Education: Bachelor's degree in Engineering, Mathematics, Statistics, Finance, Economics, or a related field. Master's degree is a plus. Experience: 1–3 years of hands-on experience in a data or business analytics role, preferably within the BFSI domain. Technical Skills: Strong proficiency in SQL & Python. Solid understanding of analytical techniques and problem-solving skills. Business Acumen: Understanding of the Banking & Financial sector (preferred). Bonus Skills: PySpark, Machine Learning, Tableau/Power BI. Workflow Type: L&S-DA-Consulting

Posted 1 week ago

Apply

5.0 years

19 - 20 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Senior Software Engineer 34332 Location: Chennai (Onsite) Job Type: Contract Budget: ₹20 LPA Notice Period: Immediate Joiners Only Role Overview We are looking for a highly skilled Senior Software Engineer to be a part of a centralized observability and monitoring platform team. The role focuses on building and maintaining a scalable, reliable observability solution that enables faster incident response and data-driven decision-making through latency, traffic, error, and saturation monitoring. This opportunity requires a strong background in cloud-native architecture, observability tooling, backend and frontend development, and data pipeline engineering. Key Responsibilities Design, build, and maintain observability and monitoring platforms to enhance MTTR/MTTX Create and optimize dashboards, alerts, and monitoring configurations using tools like Prometheus, Grafana, etc. Architect and implement scalable data pipelines and microservices for real-time and batch data processing Utilize GCP tools including BigQuery, Dataflow, Dataproc, Data Fusion, and others Develop end-to-end solutions using Spring Boot, Python, Angular, and REST APIs Design and manage relational and NoSQL databases including PostgreSQL, MySQL, and BigQuery Implement best practices in data governance, RBAC, encryption, and security within cloud environments Ensure automation and reliability through CI/CD, Terraform, and orchestration tools like Airflow and Tekton Drive full-cycle SDLC processes including design, coding, testing, deployment, and monitoring Collaborate closely with software architects, DevOps, and cross-functional teams for solution delivery Core Skills Required Proficiency in Spring Boot, Angular, Java, and Python Experience in developing microservices and SOA-based systems Cloud-native development experience, preferably on Google Cloud Platform (GCP) Strong understanding of HTML, CSS, JavaScript/TypeScript, and modern frontend frameworks Experience with infrastructure automation and monitoring tools Working knowledge of data engineering technologies: PySpark, Airflow, Apache Beam, Kafka, and similar Strong grasp of RESTful APIs, GitHub, and TDD methodologies Preferred Skills GCP Professional Certifications (e.g., Data Engineer, Cloud Developer) Hands-on experience with Terraform, Cloud SQL, Data Governance tools, and security frameworks Exposure to performance tuning, cost optimization, and observability best practices Experience Required 5+ years of experience in full-stack and cloud-based application development Strong track record in building distributed, scalable systems Prior experience with observability and performance monitoring tools is a plus Educational Qualifications Bachelor’s Degree in Computer Science, Information Technology, or a related field (mandatory) Skills: Java, Data Fusion, HTML, Dataflow, Terraform, Spring Boot, RESTful APIs, Python, Angular, Dataproc, Microservices, Apache Beam, CSS, Cloud SQL, SOA, TypeScript, TDD, Kafka, JavaScript, Airflow, GitHub, PySpark, BigQuery, GCP
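On the data side, the role above implies querying BigQuery from Python services that feed dashboards and alerts. A minimal sketch with the google-cloud-bigquery client is below; the project, dataset, table, and metric names are assumptions made for illustration.

from google.cloud import bigquery

client = bigquery.Client()  # relies on application-default credentials

# Hypothetical latency table backing a p95 panel on a Grafana-style dashboard.
sql = """
    SELECT service,
           APPROX_QUANTILES(latency_ms, 100)[OFFSET(95)] AS p95_latency_ms
    FROM `my-project.observability.request_metrics`
    WHERE event_date = CURRENT_DATE()
    GROUP BY service
    ORDER BY p95_latency_ms DESC
"""

for row in client.query(sql).result():
    print(f"{row.service}: p95 = {row.p95_latency_ms} ms")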

Posted 1 week ago

Apply

5.0 years

6 - 8 Lacs

Chennai

On-site

As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of Current State Receivables and Originations data in our data warehouse, performing impact analysis related to Ford Credit North America's modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for Ford Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas and who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and 3rd party technologies for deployment on Google Cloud Platform. GCP certified Professional Data Engineer. Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions. 5+ years of complex SQL development experience. 2+ years of experience with programming languages such as Python or Java, or with Apache Beam. Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications to production-scale solutions. In-depth understanding of GCP’s underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch/real time), leveraging Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, Dataproc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build and App Engine, alongside storage services including Cloud Storage, and DevOps tools such as Tekton, GitHub, Terraform, and Docker. Expert in designing, optimizing, and troubleshooting complex data pipelines. Experience developing and deploying microservices architectures leveraging container orchestration frameworks. Experience in designing pipelines and architectures for data processing. Passion and self-motivation to develop/experiment/implement state-of-the-art data engineering methods/techniques. Self-directed, works independently with minimal supervision, and adapts to ambiguous environments. Evidence of a proactive problem-solving mindset and willingness to take the initiative. Strong prioritization, collaboration & coordination skills, and ability to simplify and communicate complex ideas with cross-functional teams and all levels of management. Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity. Master’s degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field. Data engineering or development experience gained in a regulated financial environment.
Experience in coaching and mentoring Data Engineers. Experience with project management tools like Atlassian JIRA. Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment. Experience with data security, governance, and compliance best practices in the cloud. Experience using data science concepts on production datasets to generate insights. Design and build production data engineering solutions on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, Dataform, Astronomer, Data Fusion, Dataproc, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Artifact Registry, GCP APIs, Cloud Build, App Engine, and real-time data streaming platforms like Apache Kafka and GCP Pub/Sub. Design new solutions to better serve AI/ML needs. Lead teams to expand our AI-enabled services. Partner with governance teams to tackle key business needs. Collaborate with stakeholders and cross-functional teams to gather and define data requirements and ensure alignment with business objectives. Partner with analytics teams to understand how value is created using data. Partner with central teams to leverage existing solutions to drive future products. Design and implement batch, real-time streaming, scalable, and fault-tolerant solutions for data ingestion, processing, and storage. Create insights into existing data to fuel the creation of new data products. Perform necessary data mapping, impact analysis for changes, root cause analysis, and data lineage activities, documenting information flows. Implement and champion an enterprise data governance model. Actively promote data protection, sharing, reuse, quality, and standards to ensure data integrity and confidentiality. Develop and maintain documentation for data engineering processes, standards, and best practices. Ensure knowledge transfer and ease of system maintenance. Utilize GCP monitoring and logging tools to proactively identify and address performance bottlenecks and system failures. Provide production support by addressing production issues as per SLAs. Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure. Work within an agile product team. Deliver code frequently using Test-Driven Development (TDD), continuous integration, and continuous deployment (CI/CD). Continuously enhance your domain knowledge. Stay current on the latest data engineering practices. Contribute to the company's technical direction while maintaining a customer-centric approach.
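A minimal Apache Beam sketch of the batch ingestion pattern these responsibilities describe, reading files from Cloud Storage and appending to BigQuery, is shown below; the bucket, project, table, and field names are illustrative, the target table is assumed to already exist, and Dataflow runner options are omitted so the snippet runs locally on the DirectRunner.

import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_record(line):
    # Hypothetical newline-delimited JSON exported from a source application.
    rec = json.loads(line)
    return {"account_id": rec["account_id"], "balance": float(rec["balance"])}

with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/receivables/*.json")
        | "Parse" >> beam.Map(parse_record)
        | "Write" >> beam.io.WriteToBigQuery(
            "example-project:receivables.balances",
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )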

Posted 1 week ago

Apply

6.0 years

3 - 6 Lacs

Chennai

On-site

At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please.
Your future duties and responsibilities
Job Title: Python & PySpark/Spark Developer Position: Python & PySpark/Spark Developer Experience: 5+ yrs Category: Software Development Main location: Chennai/Bangalore Position ID: J0625-0234 Employment Type: Full Time Qualification: Bachelor of Engineering
Position: Python & PySpark/Spark Developer Experience: 6-8 years Location: Chennai (preferred), Bangalore Shift: UK Shift
Job Overview: Capital Markets Technology, Rates IT group is seeking an experienced Software Developer to work on a Risk Services platform supporting the Interest Rates, Structured and Resource Management trading desks. The platform stores risk analytics generated by a proprietary valuation engine and makes them available through a variety of interfaces to Traders, Risk managers, Finance, and others. The system also generates time-sensitive reports for financial and regulatory reporting.
What will you do? Work as a member of a global team to build Technology solutions used across the Rates and Resource Management Trading businesses. Design, develop, and maintain reusable Java components for data loading, extracts and transformations. Lead project streams within the group, and mentor others on the team. Participate in requirements gathering and meetings with business stakeholders and other technology groups to produce analysis of the Use Cases and Solution Designs. Provide a second level of support for a business-critical system.
Must Have: Strong technical developer with 7+ years of hands-on experience. 4+ years of application development experience in Python & PySpark/Spark. 4+ years of experience working on OO principles. Ability to write SQL queries. Ability to write bash shell scripts. Ability to learn & adapt. Ability to communicate in a clear & concise way. Experience in writing unit test cases & performing thorough unit testing.
Experience programming with Spring Boot and Java 8. Experience and knowledge of the Spark framework. Experience programming in Java/Python and PySpark. Familiarity with CI/CD pipelines and frameworks such as Git, Jenkins, Maven/Ansible, and related CI/CD concepts. Unix/Linux basics. REST API basics.
Nice to have: Experience in Capital Markets. Experience with Spark and HDFS strongly desired. Experience with in-memory databases. Experience in Agile delivery using Jira. Knowledge of Interest/Credit Derivative products, and related trade risk management and/or valuations.
Required qualifications to be successful in this role
Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
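Because the role above emphasizes unit-tested PySpark transformations, here is a small hedged example of that pattern: a pure transformation function plus a pytest-style case running on a local SparkSession. The desk and risk-measure column names are invented for illustration.

from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

def total_risk_by_desk(trades: DataFrame) -> DataFrame:
    """Aggregate a per-trade risk measure up to desk level (illustrative)."""
    return trades.groupBy("desk").agg(F.sum("dv01").alias("total_dv01"))

def test_total_risk_by_desk():
    spark = SparkSession.builder.master("local[1]").appName("unit-test").getOrCreate()
    trades = spark.createDataFrame(
        [("RATES", 10.0), ("RATES", 5.0), ("STRUCTURED", 2.5)],
        ["desk", "dv01"],
    )
    result = {row["desk"]: row["total_dv01"]
              for row in total_risk_by_desk(trades).collect()}
    assert result == {"RATES": 15.0, "STRUCTURED": 2.5}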

Posted 1 week ago

Apply

0 years

2 - 3 Lacs

Chennai

On-site

Responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Focus on building scalable and efficient data pipelines for handling large datasets and enabling batch & real-time data streaming and processing. Responsibilities: > Develop Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis. > Develop and maintain Kafka-based data pipelines: This includes designing Kafka Streams, setting up Kafka clusters, and ensuring efficient data flow. > Create and optimize Spark applications using Scala and PySpark: Leverage these languages to process large datasets and implement data transformations and aggregations. > Integrate Kafka with Spark for real-time processing: Build systems that ingest real-time data from Kafka and process it using Spark Streaming or Structured Streaming. > Collaborate with data teams, including data engineers, data scientists, and DevOps, to design and implement data solutions. > Tune and optimize Spark and Kafka clusters: Ensuring high performance, scalability, and efficiency of data processing workflows. > Write clean, functional, and optimized code: Adhering to coding standards and best practices. > Troubleshoot and resolve issues: Identifying and addressing any problems related to Kafka and Spark applications. > Maintain documentation: Creating and maintaining documentation for Kafka configurations, Spark jobs, and other processes. > Stay updated on technology trends: Continuously learning and applying new advancements in functional programming, big data, and related technologies. Proficiency in: Hadoop ecosystem big data tech stack (HDFS, YARN, MapReduce, Hive, Impala). Spark (Scala, Python) for data processing and analysis. Kafka for real-time data ingestion and processing. ETL processes and data ingestion tools. Deep hands-on expertise in PySpark, Scala, and Kafka. Programming Languages: Scala, Python, or Java for developing Spark applications. SQL for data querying and analysis. Other Skills: Data warehousing concepts. Linux/Unix operating systems. Problem-solving and analytical skills. Version control systems. Job Family Group: Technology | Job Family: Applications Development | Time Type: Full time. Most Relevant Skills: Please see the requirements listed above. Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter. Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
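For the Kafka-to-Spark integration the posting above describes, a minimal Structured Streaming sketch follows. The broker address, topic, and message schema are assumptions, the spark-sql-kafka connector is assumed to be on the classpath, and the console sink stands in for a real downstream store.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Hypothetical message layout for a trades topic.
schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")  # illustrative broker
         .option("subscribe", "trades")                      # illustrative topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Running average price per symbol, written to the console for demonstration.
avg_price = events.groupBy("symbol").agg(F.avg("price").alias("avg_price"))

query = avg_price.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()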

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Chennai

On-site

Mandatory Skills: 4-6 years of experience with basic proficiency in Python and SQL, and familiarity with libraries like NumPy or Pandas. Understanding of fundamental programming concepts (data structures, algorithms, etc.). Eagerness to learn new tools and frameworks, including Generative AI technologies. Familiarity with version control systems (e.g., Git). Strong problem-solving skills and attention to detail. Exposure to data processing tools like Apache Spark or PySpark, and SQL. Basic understanding of APIs and how to integrate them. Interest in AI/ML and willingness to explore frameworks like LangChain. Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus. Job Description: We are seeking a motivated Python Developer to join our team. The ideal candidate will have a foundational understanding of Python programming and SQL, and a passion for learning and growing in the field of software development. You will work closely with senior developers and contribute to building and maintaining applications, with opportunities to explore Generative AI frameworks and data processing tools. Key Responsibilities: Assist in developing and maintaining Python-based applications. Write clean, efficient, and well-documented code. Collaborate with senior developers to integrate APIs and frameworks. Support data processing tasks using libraries like Pandas or PySpark. Learn and work with Generative AI frameworks (e.g., LangChain, LangGraph) under guidance. Debug and troubleshoot issues in existing applications. About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
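As a simple illustration of the API-integration and data-processing tasks listed above, here is a short requests-plus-pandas sketch; the endpoint URL and response fields are hypothetical.

import pandas as pd
import requests

# Hypothetical REST endpoint returning a JSON array of records.
resp = requests.get("https://api.example.com/v1/transactions", timeout=10)
resp.raise_for_status()

df = pd.DataFrame(resp.json())

# Basic cleanup plus a simple aggregate, typical of the tasks described above.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
summary = (df.dropna(subset=["amount"])
             .groupby("category")["amount"]
             .sum()
             .sort_values(ascending=False))
print(summary.head(10))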

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai

On-site

7+ years of experience in Big Data with strong expertise in Spark and Scala. Mandatory Skills: Big Data, primarily Spark and Scala. Strong knowledge of HDFS, Hive, and Impala, with knowledge of Unix, Oracle, and Autosys. Good to Have: Agile methodology and banking expertise. Strong communication skills. Not limited to Spark batch; Spark Streaming experience is needed. NoSQL DB experience: HBase/Mongo/Couchbase. About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies