
8325 PySpark Jobs - Page 3

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a skilled individual, you will be responsible for designing, developing, and implementing data pipelines using Azure Data Factory. Your primary objective will be to efficiently extract, transform, and load data from diverse sources into Azure Data Lake Storage (ADLS). Additionally, you may have the opportunity to work with Azure Databricks, Python, and PySpark to enhance your capabilities in this role.
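For context, a minimal sketch of the kind of ADF-orchestrated PySpark transform this role describes might look like the following. All paths, container names, and column names are illustrative placeholders, not the employer's actual code.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-ingest").getOrCreate()

# Hypothetical source and sink locations; a real pipeline would receive these
# as parameters from Azure Data Factory or a Databricks job.
source_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/*.csv"
target_path = "abfss://curated@examplelake.dfs.core.windows.net/sales/"

df = (spark.read.option("header", True).csv(source_path)
      .withColumn("amount", F.col("amount").cast("double"))
      .withColumn("load_date", F.current_date())
      .dropDuplicates(["order_id"]))

# Land the curated data in ADLS as partitioned Parquet for downstream analysis.
df.write.mode("overwrite").partitionBy("load_date").parquet(target_path)
```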

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With over 125,000 employees spread across more than 30 countries, we are characterized by our innate curiosity, entrepreneurial agility, and commitment to creating lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises, including the Fortune Global 500, by leveraging our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently looking for a Principal Consultant - Data Scientist, specializing in Azure Generative AI & Advanced Analytics. In this role, we are seeking a highly skilled and experienced Data Scientist with hands-on expertise in Azure Generative AI, Document Intelligence, Agentic AI, and advanced data pipelines. Your responsibilities will include developing and optimizing AI/ML models, analyzing complex datasets, and providing strategic recommendations for embedding models and Generative AI applications. You will play a crucial role in driving AI-driven insights and automation within our business.

Responsibilities:
- Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets to develop actionable insights and drive data-driven decision-making.
- Design, develop, and implement Generative AI solutions leveraging AWS Bedrock, Azure OpenAI, Azure Machine Learning, and Cognitive Services.
- Utilize Azure Document Intelligence to extract and process structured and unstructured data from diverse document sources.
- Build and optimize data pipelines for processing and analyzing large-scale datasets efficiently.
- Implement Agentic AI techniques to develop intelligent, autonomous systems that can make decisions and take actions.
- Research, evaluate, and recommend embedding models, language models, and generative models for diverse business use cases.
- Continuously monitor and assess the performance of AI models, generative models, and data-driven solutions, refining and optimizing them as needed.
- Stay up to date with the latest industry trends, tools, and technologies in data science, AI, and generative models, and apply this knowledge to improve existing solutions and develop new ones.
- Mentor and guide junior team members, helping to develop their skills and contribute to their professional growth.
- Ensure model explainability, fairness, and compliance with responsible AI principles.
- Apply best practices from the latest advancements in AI, ML, and data science to enhance business operations.

Minimum Qualifications / Skills:
- Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field.
- Relevant experience in data science, machine learning, AI applications, generative AI prompt engineering, and creating custom models.
- Strong proficiency in Python, TensorFlow, PyTorch, PySpark, Scikit-learn, and MLflow.
- Hands-on experience with Azure AI services (Azure OpenAI, Azure Document Intelligence, Azure Machine Learning, Azure Synapse, Azure Data Factory, Databricks, RAG pipelines).
- Expertise in LLMs, transformer architectures, and embeddings.
- Experience in building and optimizing end-to-end data pipelines.
- Familiarity with vector databases (e.g., FAISS, Pinecone) and knowledge retrieval techniques.
- Knowledge of Reinforcement Learning from Human Feedback (RLHF), fine-tuning LLMs, and prompt engineering.
- Strong analytical skills with the ability to translate business requirements into AI/ML solutions.
- Excellent problem-solving, critical thinking, and communication skills.
- Experience with cloud-native AI deployment, containerization (Docker, Kubernetes), and MLOps practices is a plus.

Preferred Qualifications / Skills:
- Experience with multi-modal AI models and computer vision applications.
- Exposure to LangChain, Semantic Kernel, RAG (Retrieval-Augmented Generation), and knowledge graphs.
- Certifications in Microsoft Azure AI, Data Science, or ML Engineering.

If you are passionate about leveraging your skills and expertise to drive AI-driven insights and automation in a dynamic environment, we invite you to apply for the role of Principal Consultant - Data Scientist at Genpact. Join us in shaping the future and creating lasting value for our clients.
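As a small illustration of the embedding and vector-retrieval skills this role lists (FAISS, embeddings, RAG), the sketch below embeds a few toy documents and runs a nearest-neighbour search. The documents, the embedding model name, and the query are placeholders; it is not Genpact's actual solution.

```python
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

# Toy documents standing in for text extracted by a document-intelligence pipeline.
docs = ["Invoice 123 was paid on 2024-01-05.",
        "Contract renewal is due in March.",
        "The claim was rejected due to missing documents."]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model could be substituted
doc_vectors = model.encode(docs, normalize_embeddings=True)

# Inner product on normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(np.asarray(doc_vectors, dtype="float32"))

query = model.encode(["When is the contract due?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)
print([docs[i] for i in ids[0]])  # the retrieved passages would feed a RAG prompt
```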

Posted 1 day ago

Apply

10.0 - 12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

JR: R00208204
Experience: 10-12 years
Educational Qualification: Any degree
Job Title: S&C - Data and AI - CFO&EV - Quantexa Platform (Associate Manager)
Management Level: 8 - Associate Manager
Location: Pune, PDC2C
Must-have skills: Quantexa Platform
Good-to-have skills: Experience in financial modeling, valuation techniques, and deal structuring.

Job Summary: This role involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions.

Roles & Responsibilities: Provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance.

What's in it for you?
The Accenture CFO & EV team within the Data & AI practice has a comprehensive suite of capabilities in risk, fraud, financial crime, and finance. Within the risk realm, our focus revolves around model development, model validation, and auditing of models. Our work also extends to ongoing performance evaluation, vigilant monitoring, meticulous governance, and thorough documentation of models. You will get to work with top financial clients globally, with access to resources enabling you to utilize cutting-edge technologies and foster innovation with the world's most recognizable companies. Accenture will continually invest in your learning and growth and will support you in expanding your knowledge. You'll be part of a diverse and vibrant team, collaborating with talented individuals from various backgrounds and disciplines, continually pushing the boundaries of business capabilities and fostering an environment of innovation.

What you would do in this role:
Engagement Execution
- Lead client engagements that may involve model development, validation, governance, strategy, transformation, implementation, and end-to-end delivery of fraud analytics/management solutions for Accenture's clients.
- Advise clients on a wide range of fraud management/analytics initiatives. Projects may involve fraud management advisory work for CXOs to achieve a variety of business and operational outcomes.
- Develop and frame proofs of concept for key clients, where applicable.
Practice Enablement
- Mentor, groom, and counsel analysts and consultants.
- Support development of the practice by driving innovations and initiatives.
- Develop thought capital and disseminate information on current and emerging trends in fraud analytics and management.
- Support the sales team in identifying and winning potential opportunities by assisting with RFPs and RFIs; assist in designing POVs and GTM collateral.
Travel: Willingness to travel up to 40% of the time.
Professional Development Skills: Project dependent.

Professional & Technical Skills:
- Relevant experience in the required domain.
- Strong analytical, problem-solving, and communication skills.
- Ability to work in a fast-paced, dynamic environment.
- Advanced skills in development and validation of fraud analytics models, strategies, and visualizations.
- Understanding of new and evolving methodologies, tools, and technologies in the fraud management space.
- Expertise in one or more domains/industries, including regulations, frameworks, etc.
- Experience in building models using AI/ML methodologies.
- Modeling: experience in one or more analytical tools such as SAS, R, Python, SQL, etc.
- Knowledge of data processes, ETL, and tools/vendor products such as VISA AA, FICO Falcon, EWS, RSA, IBM Trusteer, SAS AML, Quantexa, Ripjar, Actimize, etc.
- Proven experience in data engineering, data governance, or data science roles.
- Experience in Generative AI or central/supervisory banking is a plus.
- Strong conceptual knowledge and practical experience in the development, validation, and deployment of ML/AI models.
- Hands-on programming experience with analytics and visualization tools (Python, R, PySpark, SAS, SQL, Power BI/Tableau).
- Knowledge of big data, MLOps, and cloud platforms (Azure/GCP/AWS).
- Strong written and oral communication skills.
- Project management skills and the ability to manage multiple tasks concurrently.
- Strong delivery experience on short- and long-term analytics projects.

Additional Information: Opportunity to work on innovative projects; career growth and leadership exposure.

About Our Company | Accenture
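As a toy illustration of the fraud-model development work described above, here is a minimal supervised classifier in Python with scikit-learn. The features, labels, and data are synthetic placeholders; real fraud models would be trained on large transaction histories and validated far more rigorously.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic transactions standing in for a labelled fraud dataset.
df = pd.DataFrame({
    "amount": [25, 4200, 80, 9900, 15, 7300],
    "txn_per_hour": [1, 14, 2, 22, 1, 18],
    "is_foreign": [0, 1, 0, 1, 0, 1],
    "is_fraud": [0, 1, 0, 1, 0, 1],
})

X, y = df.drop(columns="is_fraud"), df["is_fraud"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, stratify=y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```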

Posted 1 day ago

Apply

4.0 - 7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis. Grade: T5. Please note that the job will close at 12am on the posting close date, so please submit your application prior to the close date.

Accountabilities (your main responsibilities):
- Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.
- Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing including collecting, parsing, managing, analyzing, and visualizing large sets of data.
- Data Quality Management - Cleanse the data and improve data quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies, and enforce best practices to scale data analysis across platforms.
- Data Transformation - Process data by cleansing and transforming it into proper storage structures for querying and analysis using ETL and ELT processes.
- Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations.

Qualifications & Specifications:
- Master's/Bachelor's degree in Engineering, Computer Science, Math, Statistics, or equivalent.
- Strong programming skills in Python/PySpark/SAS.
- Proven experience with large data sets and related technologies - Hadoop, Hive, distributed computing systems, Spark optimization.
- Experience on cloud platforms (preferably Azure) and their services: Azure Data Factory (ADF), ADLS Storage, Azure DevOps.
- Hands-on experience with Databricks, Delta Lake, and Workflows.
- Knowledge of DevOps processes and tools such as Docker, CI/CD, Kubernetes, Terraform, Octopus.
- Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs.
- Experience with a BI tool such as Power BI (good to have).
- Cloud migration experience (good to have).
- Cloud and data engineering certification (good to have).
- Experience working in an Agile environment.
- 4-7 years of relevant work experience needed. Experience with stakeholder management will be an added advantage.

What We Are Looking For
Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or a similar discipline. Master's degree or PhD preferred.
Knowledge, Skills and Abilities: Fluency in English; analytical skills; accuracy and attention to detail; numerical skills; planning and organizing skills; presentation skills; data modeling and database design; ETL (Extract, Transform, Load) skills; programming skills.

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.
Our Company FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding. Our Philosophy The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company. Our Culture Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970’s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.
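Tying back to the Databricks, Delta Lake, and ADF experience this role asks for, a minimal upsert step on a Delta table might look like the sketch below. The table paths and join key are placeholders and the job parameters would normally come from ADF or Databricks Workflows; this is illustrative only.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("customer-upsert").getOrCreate()

# Hypothetical locations for the curated Delta table and the day's change feed.
delta_path = "abfss://curated@examplelake.dfs.core.windows.net/customers"
updates = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/customers_delta/")

# Merge (upsert) the incoming changes into the curated Delta table.
(DeltaTable.forPath(spark, delta_path).alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```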

Posted 1 day ago

Apply

4.0 - 7.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis. Grade: T5. Please note that the job will close at 12am on the posting close date, so please submit your application prior to the close date.

Accountabilities (your main responsibilities):
- Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.
- Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing including collecting, parsing, managing, analyzing, and visualizing large sets of data.
- Data Quality Management - Cleanse the data and improve data quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies, and enforce best practices to scale data analysis across platforms.
- Data Transformation - Process data by cleansing and transforming it into proper storage structures for querying and analysis using ETL and ELT processes.
- Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations.

Qualifications & Specifications:
- Master's/Bachelor's degree in Engineering, Computer Science, Math, Statistics, or equivalent.
- Strong programming skills in Python/PySpark/SAS.
- Proven experience with large data sets and related technologies - Hadoop, Hive, distributed computing systems, Spark optimization.
- Experience on cloud platforms (preferably Azure) and their services: Azure Data Factory (ADF), ADLS Storage, Azure DevOps.
- Hands-on experience with Databricks, Delta Lake, and Workflows.
- Knowledge of DevOps processes and tools such as Docker, CI/CD, Kubernetes, Terraform, Octopus.
- Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs.
- Experience with a BI tool such as Power BI (good to have).
- Cloud migration experience (good to have).
- Cloud and data engineering certification (good to have).
- Experience working in an Agile environment.
- 4-7 years of relevant work experience needed. Experience with stakeholder management will be an added advantage.

What We Are Looking For
Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or a similar discipline. Master's degree or PhD preferred.
Knowledge, Skills and Abilities: Fluency in English; analytical skills; accuracy and attention to detail; numerical skills; planning and organizing skills; presentation skills; data modeling and database design; ETL (Extract, Transform, Load) skills; programming skills.

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.
Our Company FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding. Our Philosophy The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company. Our Culture Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970’s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.

Posted 1 day ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: GCP Data Engineer Location: Remote (Only From India) Employment Type: Contract Long-Term Start Date: Immediate Time Zone Overlap: Must be available to work during EST hours (Canada) Dual Employment: Not permitted – must be terminated if applicable About the Role: We are looking for a highly skilled GCP Data Engineer to join our international team. The ideal candidate will have strong experience with Google Cloud Platform's data tools, particularly DataProc and BigQuery, and will be comfortable working in a remote, collaborative environment. You will play a key role in designing, building, and optimizing data pipelines and infrastructure that drive business insights. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes on GCP Leverage GCP DataProc and BigQuery to process and analyze large volumes of data Write efficient, maintainable code using Python and SQL Develop Spark-based data workflows using PySpark Collaborate with cross-functional teams in an international environment Ensure data quality, integrity, and security Participate in code reviews and optimize system performance Required Qualifications: 5–8 years of hands-on experience in Data Engineering Proven expertise in GCP DataProc and BigQuery Strong programming skills in Python and SQL Solid experience with PySpark for distributed data processing Fluent English with excellent communication skills Ability to work independently in a remote team environment Comfortable with working during Canada EST time zone overlap Optional / Nice-to-Have Skills: Experience with additional GCP tools and services Familiarity with CI/CD for data engineering workflows Exposure to data governance and data security best practices Interview Process: Technical Test (Online screening) 15-minute HR Interview Technical Interview with 1–2 rounds Please share only if you match above JD at hiring@khey-digit.com
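To illustrate the Dataproc and BigQuery work this role describes, the sketch below reads a BigQuery table with PySpark, aggregates it, and writes the result back. It assumes the spark-bigquery connector is available on the cluster; the project, dataset, table, and bucket names are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-daily-agg").getOrCreate()

# Read a BigQuery table through the spark-bigquery connector (placeholder names).
events = (spark.read.format("bigquery")
          .option("table", "my-project.analytics.events")
          .load())

# Simple daily aggregation of event counts by type.
daily = (events.groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
         .agg(F.count("*").alias("events")))

# Write the result back to BigQuery, staging through a temporary GCS bucket.
(daily.write.format("bigquery")
 .option("table", "my-project.analytics.events_daily")
 .option("temporaryGcsBucket", "my-temp-bucket")
 .mode("overwrite")
 .save())
```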

Posted 1 day ago

Apply

0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Job Description TRAINING DEVELOPER – NIQ, CHENNAI/Pune, India About This Job As Training Developer, you will be responsible for designing, developing, managing, organizing and conducting technical workshops, and evaluation of workshop effectiveness. Responsibilities Responsible to Plan, design & create (or curate) the training materials on company products, processes, and technologies Orient trainers about the content and scope of the training Deliver trainings when required Updates the content on an on-going basis Analyzes course evaluations to judge effectiveness of training sessions and to implement suggestions for improvement Interpret data to judge progress and cost-effectiveness Help the Tech Competency Leader to evaluate external learning vendors Qualifications A LITTLE BIT ABOUT YOU Proven experience as a Tech expert (2 yrs) willing to learn training skills Proven ability to complete full training cycle (assess needs, plan, develop, coordinate, monitor and evaluate) willing to learn tech skills Proven track record in diverse technical competencies - Java Full stack or Data Engineering with Python/ Pyspark /Databricks or Data Science, AI/ML track or Data and Business Intelligence stack as trainer/developer. Working knowledge of Apache Airflow is an added advantage Besides proficiency with M365 Platform and well-versed on the various LMS platforms available in the market Excellent verbal and written communicators Facilitation / Organization skills Qualifications Degree at university level, preferable in technical/educational related area Train The Trainer certification is a plus Additional Information Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. 
Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 1 day ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Title: Data Engineering Lead
Overall Years of Experience: 8 to 10 years
Relevant Years of Experience: 4+

The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines.

Position Summary:
- Design and implement scalable data lake architectures using Azure Data Lake services.
- Develop and maintain data pipelines to ingest data from various sources.
- Optimize data storage and retrieval processes for efficiency and performance.
- Ensure data security and compliance with industry standards.
- Collaborate with data scientists and analysts to facilitate data accessibility.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data lake designs, processes, and best practices.
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.

Essential Roles and Responsibilities
Must Have Skills: Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Python (PySpark, NumPy, etc.), SQL, ETL, data warehousing, Azure DevOps. Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming. Experience in integration with business intelligence tools such as Power BI.
Good To Have Skills: Big data technologies (e.g., Hadoop, Spark), data security.
General Skills: Experience with Agile and DevOps methodologies and the software development lifecycle. Proactive and responsible for deliverables; escalates dependencies and risks. Works with most DevOps tools with limited supervision. Completes assigned tasks on time with regular status reporting. Should be able to train new team members. Knowledge of cloud solutions such as Azure or AWS, with DevOps/cloud certifications, is desirable. Should be able to work with multicultural, global teams and collaborate virtually, and build strong relationships with project stakeholders.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
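The streaming-pipeline experience listed above (Azure Event Hub, Spark Streaming) could look roughly like the Structured Streaming sketch below, which reads from an Event Hubs namespace via its Kafka-compatible endpoint, windows the data, and writes to Delta. The endpoint, topic, schema, and paths are placeholders, and the SASL authentication options a real connection needs are omitted.

```python
from pyspark.sql import SparkSession, functions as F, types as T

spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

schema = T.StructType([
    T.StructField("device_id", T.StringType()),
    T.StructField("temperature", T.DoubleType()),
    T.StructField("event_time", T.TimestampType()),
])

# Event Hubs namespaces expose a Kafka-compatible endpoint; names are placeholders
# and the SASL/SSL options required in practice are omitted for brevity.
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "mynamespace.servicebus.windows.net:9093")
       .option("subscribe", "telemetry")
       .load())

parsed = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Five-minute average temperature per device, tolerating 10 minutes of late data.
agg = (parsed.withWatermark("event_time", "10 minutes")
       .groupBy(F.window("event_time", "5 minutes"), "device_id")
       .agg(F.avg("temperature").alias("avg_temp")))

(agg.writeStream.outputMode("append")
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/telemetry")
 .start("/mnt/curated/telemetry_agg"))
```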

Posted 1 day ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Title: Data Engineering Lead
Overall Years of Experience: 8 to 10 years
Relevant Years of Experience: 4+

The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines.

Position Summary:
- Design and implement scalable data lake architectures using Azure Data Lake services.
- Develop and maintain data pipelines to ingest data from various sources.
- Optimize data storage and retrieval processes for efficiency and performance.
- Ensure data security and compliance with industry standards.
- Collaborate with data scientists and analysts to facilitate data accessibility.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data lake designs, processes, and best practices.
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.

Essential Roles and Responsibilities
Must Have Skills: Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Python (PySpark, NumPy, etc.), SQL, ETL, data warehousing, Azure DevOps. Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming. Experience in integration with business intelligence tools such as Power BI.
Good To Have Skills: Big data technologies (e.g., Hadoop, Spark), data security.
General Skills: Experience with Agile and DevOps methodologies and the software development lifecycle. Proactive and responsible for deliverables; escalates dependencies and risks. Works with most DevOps tools with limited supervision. Completes assigned tasks on time with regular status reporting. Should be able to train new team members. Knowledge of cloud solutions such as Azure or AWS, with DevOps/cloud certifications, is desirable. Should be able to work with multicultural, global teams and collaborate virtually, and build strong relationships with project stakeholders.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Kochi, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Title: Data Engineering Lead
Overall Years of Experience: 8 to 10 years
Relevant Years of Experience: 4+

The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines.

Position Summary:
- Design and implement scalable data lake architectures using Azure Data Lake services.
- Develop and maintain data pipelines to ingest data from various sources.
- Optimize data storage and retrieval processes for efficiency and performance.
- Ensure data security and compliance with industry standards.
- Collaborate with data scientists and analysts to facilitate data accessibility.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data lake designs, processes, and best practices.
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.

Essential Roles and Responsibilities
Must Have Skills: Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Python (PySpark, NumPy, etc.), SQL, ETL, data warehousing, Azure DevOps. Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming. Experience in integration with business intelligence tools such as Power BI.
Good To Have Skills: Big data technologies (e.g., Hadoop, Spark), data security.
General Skills: Experience with Agile and DevOps methodologies and the software development lifecycle. Proactive and responsible for deliverables; escalates dependencies and risks. Works with most DevOps tools with limited supervision. Completes assigned tasks on time with regular status reporting. Should be able to train new team members. Knowledge of cloud solutions such as Azure or AWS, with DevOps/cloud certifications, is desirable. Should be able to work with multicultural, global teams and collaborate virtually, and build strong relationships with project stakeholders.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Principal Data Scientist Primary Skills Hypothesis Testing, T-Test, Z-Test, Regression (Linear, Logistic), Python/PySpark, SAS/SPSS, Statistical analysis and computing, Probabilistic Graph Models, Great Expectation, Evidently AI, Forecasting (Exponential Smoothing, ARIMA, ARIMAX), Tools(KubeFlow, BentoML), Classification (Decision Trees, SVM), ML Frameworks (TensorFlow, PyTorch, Sci-Kit Learn, CNTK, Keras, MXNet), Distance (Hamming Distance, Euclidean Distance, Manhattan Distance), R/ R Studio Job requirements JD is below: The Agentic AI Lead/Architect is a pivotal role responsible for driving the research, development, and deployment of semi-autonomous AI agents to solve complex enterprise challenges. This role involves hands-on experience with LangGraph, leading initiatives to build multi-agent AI systems that operate with greater autonomy, adaptability, and decision-making capabilities. The ideal candidate will have deep expertise in LLM orchestration, knowledge graphs, reinforcement learning (RLHF/RLAIF), and real-world AI applications. As a leader in this space, they will be responsible for designing, scaling, and optimizing agentic AI workflows, ensuring alignment with business objectives while pushing the boundaries of next-gen AI automation. Key Responsibilities Architecting & Scaling Agentic AI Solutions Design and develop multi-agent AI systems using LangGraph for workflow automation, complex decision-making, and autonomous problem-solving. Build memory-augmented, context-aware AI agents capable of planning, reasoning, and executing tasks across multiple domains. Define and implement scalable architectures for LLM-powered agents that seamlessly integrate with enterprise applications. Hands-On Development & Optimization Develop and optimize agent orchestration workflows using LangGraph, ensuring high performance, modularity, and scalability. Implement knowledge graphs, vector databases (Pinecone, Weaviate, FAISS), and retrieval-augmented generation (RAG) techniques for enhanced agent reasoning. Apply reinforcement learning (RLHF/RLAIF) methodologies to fine-tune AI agents for improved decision-making. Driving AI Innovation & Research Lead cutting-edge AI research in Agentic AI, LangGraph, LLM Orchestration, and Self-improving AI Agents. Stay ahead of advancements in multi-agent systems, AI planning, and goal-directed behavior, applying best practices to enterprise AI solutions. Prototype and experiment with self-learning AI agents, enabling autonomous adaptation based on real-time feedback loops. AI Strategy & Business Impact Translate Agentic AI capabilities into enterprise solutions, driving automation, operational efficiency, and cost savings. Lead Agentic AI proof-of-concept (PoC) projects that demonstrate tangible business impact and scale successful prototypes into production. Mentorship & Capability Building Lead and mentor a team of AI Engineers and Data Scientists, fostering deep technical expertise in LangGraph and multi-agent architectures. Establish best practices for model evaluation, responsible AI, and real-world deployment of autonomous AI agents.
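As a small illustration of the hypothesis-testing skills listed above (T-Test, statistical analysis in Python), here is a minimal Welch's t-test with SciPy. The data are synthetic and purely illustrative of the kind of A/B comparison such a role might run.

```python
import numpy as np
from scipy import stats

# Synthetic samples standing in for metrics under two strategies (e.g. control vs. variant).
rng = np.random.default_rng(42)
control = rng.normal(loc=100, scale=15, size=200)
variant = rng.normal(loc=104, scale=15, size=200)

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the variant mean differs from control at the 5% level.")
```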

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Python + AWS/DataBricks Developer 📍 Hyderabad (Work from Office) 📅 5+ years experience | Immediate joiners preferred 🔹 Must-have Skills: Expert Python programming (3.7+) Strong AWS (EC2, S3, Lambda, Glue, CloudFormation) DataBricks platform experience ETL pipeline development SQL/NoSQL databases PySpark/Pandas proficiency 🔹 Good-to-have: AWS certifications Terraform knowledge Airflow experience Interested candidates can share profiles to shruti.pandey@codeethics.in Please mention the position you're applying for! #Hiring #ReactJS #Python #AWS #DataBricks #HyderabadJobs #TechHiring #WFO
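For context on the Python and AWS (S3, ETL) skills listed above, a minimal ETL step might look like the sketch below: pull a raw CSV from S3 with boto3, clean it with pandas, and write it back as Parquet. The bucket names and keys are placeholders, not a real environment.

```python
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

# Read a raw CSV object from S3 (placeholder bucket/key).
obj = s3.get_object(Bucket="example-raw-bucket", Key="orders/2024-06-01.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Basic cleaning: drop rows without an order id and normalise the amount column.
df = df.dropna(subset=["order_id"]).assign(amount=lambda d: d["amount"].astype(float))

# Write the curated result back to S3 as Parquet (requires pyarrow).
buf = io.BytesIO()
df.to_parquet(buf, index=False)
s3.put_object(Bucket="example-curated-bucket",
              Key="orders/2024-06-01.parquet",
              Body=buf.getvalue())
```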

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
This role is part of a team that develops software to process data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activities as they surf the internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and to detect fraudulent behavior. The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including design, development, and testing, and is expected to coordinate, support, and work with multiple delocalized project teams in multiple regions. As a member of the technical staff on our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day across 3 different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system running in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.

Responsibilities
- System Deployment: Conceive, design, and build new features in the existing backend processing pipelines.
- CI/CD Implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes.
- Code Quality and Best Practices: Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality.
- Performance Optimization: Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores.
- Mentorship and Collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development.
- Security and Compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security.

Key Skills
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Proven experience (minimum 3 years) in high-volume data processing development using ETL tools such as AWS Glue or PySpark, Java, SQL, and databases such as Postgres.
- Minimum 2 years of development on an AWS platform.
- Strong understanding of CI/CD principles and tools; GitLab a plus.
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions.
- Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply.
- Uses team collaboration to create innovative solutions efficiently.

Other Desirable Skills
- Knowledge of networking principles and security best practices.
- AWS certifications.
- Experience with data warehouses, ETL, and/or data lakes very desirable.
- Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus.
- Exposure to the Google Cloud Platform (GCP).

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
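To illustrate the AWS Glue / PySpark ETL experience this role asks for, a minimal Glue job skeleton might look like the sketch below. The catalog database, table name, and output path are placeholders; a real meter-processing job would be considerably more involved.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve job arguments and build a GlueContext.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Load a table registered in the Glue Data Catalog (placeholder names).
events = glue_context.create_dynamic_frame.from_catalog(
    database="meter_raw", table_name="device_events").toDF()

# Simple daily roll-up per panelist.
daily = events.groupBy("panelist_id", "event_date").count()

# Write the curated output to S3 (placeholder bucket).
daily.write.mode("overwrite").parquet("s3://example-curated/device_events_daily/")
```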

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

On-site

Job Title: Senior Machine Learning Engineer (Azure ML + Databricks + MLOps)
Experience: 5+ years in AI/ML Engineering
Employment Type: Full-Time

Job Summary: We are looking for a Senior Machine Learning Engineer with strong expertise in Azure Machine Learning and Databricks to lead the development and deployment of scalable AI/ML solutions. You’ll work with cross-functional teams to design, build, and optimize machine learning pipelines that power critical business functions.

Key Responsibilities:
- Design, build, and deploy scalable machine learning models using Azure Machine Learning (Azure ML) and Databricks.
- Develop and maintain end-to-end ML pipelines for training, validation, and deployment.
- Collaborate with data engineers and architects to structure data pipelines on Azure Data Lake, Synapse, or Delta Lake.
- Integrate models into production environments using Azure ML endpoints, MLflow, or REST APIs.
- Monitor and maintain deployed models, ensuring performance and reliability over time.
- Use Databricks notebooks and PySpark to process and analyze large-scale datasets.
- Apply MLOps principles using tools like Azure DevOps, CI/CD pipelines, and MLflow for versioning and reproducibility.
- Ensure compliance with data governance, security, and responsible AI practices.

Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of hands-on experience in machine learning or data science roles.
- Strong proficiency in Python, and experience with libraries like Scikit-learn, XGBoost, PyTorch, or TensorFlow.
- Deep experience with Azure Machine Learning services (e.g., workspaces, compute clusters, pipelines).
- Proficient in Databricks, including Spark (PySpark), notebooks, and Delta Lake.
- Strong understanding of MLOps, experiment tracking, model management, and deployment automation.
- Experience with data engineering tools (e.g., Azure Data Factory, Azure Data Lake, Azure Synapse).

Preferred Skills:
- Azure certifications (e.g., Azure AI Engineer Associate, Azure Data Scientist Associate).
- Familiarity with Kubernetes, Docker, and container-based deployments.
- Experience working with structured and unstructured data (NLP, time series, image data, etc.).
- Knowledge of cost optimization, security best practices, and scalability on Azure.
- Experience with A/B testing, monitoring model drift, and real-time inference.

Job Types: Full-time, Permanent
Benefits: Flexible schedule, paid sick time, paid time off, Provident Fund
Work Location: In person
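To illustrate the experiment-tracking side of the MLOps work described above, a minimal MLflow run might look like the sketch below. The dataset and model are synthetic stand-ins; on Databricks or Azure ML the tracking URI and workspace configuration would be set by the platform.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real training set.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the fitted model for later comparison or deployment.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```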

Posted 1 day ago

Apply

10.0 years

5 - 10 Lacs

Hyderābād

Remote

Join Amgen's Mission to Serve Patients If you feel like you’re part of something bigger, it’s because you are. At Amgen, our shared mission—to serve patients—drives all that we do. It is key to our becoming one of the world’s leading biotechnology companies. We are global collaborators who achieve together—researching, manufacturing, and delivering ever-better products that reach over 10 million patients worldwide. It’s time for a career you can be proud of. Specialist IS Software engineer Live What you will do Let’s do this. Let’s change the world. In this vital role We are looking for a creative and technically skilled Specialist IS Software engineer - Data Management Lead . This role will be responsible for leading data management initiatives collaborating across business, IT, and data governance teams. The ideal candidate will have extensive experience in configuring and implementing Collibra products, established track record of building high-quality data governance and data quality solutions with a strong hands-on design and engineering skills. The candidate must also possess strong analytical and communication skills. As a Collibra Lead Developer, you will play a key role in the design, implementation, and management of our Collibra Data Governance and Data Quality platform. You will work closely with stakeholders across the organization to ensure the successful deployment of data governance processes, solutions, and best practices. Building and integrating information systems to meet the company’s needs. Design and implement data governance frameworks, policies, and procedures within Collibra. Configure, implement, and maintain Collibra Data Quality Center to support enterprise-wide data quality initiatives Lead the implementation and configuration of Collibra Data Governance platform. Develop, customize, and maintain Collibra workflows, dashboards, and business rules. Collaborate with data stewards, data owners, and business analysts to understand data governance requirements and translate them into technical solutions Provide technical expertise and support to business users and IT teams on Collibra Data Quality functionalities. Collaborate with data engineers and architects to implement data quality solutions within data pipelines and data warehouses. Participate in data quality improvement projects, identifying root causes of data issues and implementing corrective actions Integrate Collibra with other enterprise data management systems (e.g., data catalogs, BI tools, data lakes). Provide technical leadership and mentoring to junior developers and team members. Troubleshoot and resolve issues with Collibra environment and data governance processes. Assist with training and enablement of business users on Collibra platform features and functionalities. Stay up to date with new releases, features, and best practices in Collibra and data governance. Basic Qualifications: Master’s degree in computer science & engineering preferred with 10+ years of software development experience OR, Bachelor’s degree in computer science & engineering preferred with 10+ years of software development experience Proven experience (7+ years) in data governance or data management roles. Strong experience with Collibra Data Governance platform, including design, configuration, and development. Hands-on experience with Collibra workflows, rules engine, and data stewardship processes. Experience with integrations between Collibra and other data management tools. 
Proficiency in SQL and scripting languages (e.g., Python, JavaScript). Strong problem-solving and troubleshooting skills. Excellent communication and collaboration skills to work with both technical and non-technical stakeholders Self-starter with strong communication and collaboration skills to work effectively with cross-functional teams. Excellent problem-solving skills and attention to detail. Domain knowledge of the Life sciences Industry Recent experience working in a Scaled Agile environment with Agile tools, e.g. Jira, Confluence, etc. Preferred Qualifications: Deep expertise in Collibra platform including Data Governance and Data Quality. In-depth knowledge of data governance principles, data stewardship processes, data quality concepts, data profiling and validation methodologies, techniques, and best practices. Hands-on experience in implementing and configuring Collibra Data Governance, Collibra Data Quality, including developing metadata ingestion, data quality rules, scorecards, and workflows. Strong experience in configuring and connecting to various data sources for metadata, data lineage, data profiling and data quality. Experience integrating data management capabilities (MDM, Reference Data) Good experience with Azure cloud services, Azure Data technologies and Databricks Solid understanding of relational database concepts and ETL processes. Proficient use of tools, techniques, and manipulation including programming languages (Python, PySpark, SQL etc.), for data profiling, and validation. Data modeling with tools like Erwin and knowledge of insurance industry standards (e.g., ACORD) and insurance data (policy, claims, underwriting, etc.). Familiarity with data visualization tools like Power BI. Good to Have Skills Willingness to work on AI Applications Experience with popular large language models Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, remote teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. Thrive What you can expect of us As we work to develop treatments that take care of others, we also work to care for our teammates’ professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us. careers.amgen.com Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Cortex is urgently hiring for the role of Data Engineer.
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in office required)
Notice Period: Immediate to 10 days only
Key skills: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks.

Role Overview
We are looking for a highly skilled Data Engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

If you are interested, kindly send your resume to us by clicking "easy apply".
This job is posted by Aishwarya K, Business HR - Day recruitment, Cortex Consultants LLC (US) | Cortex Consulting Pvt Ltd (India) | Tcell (Canada). US | India | Canada
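A minimal example of the Kafka-to-Databricks ingestion this role describes is sketched below: a Structured Streaming job that reads raw events from a Kafka topic and lands them in a bronze Delta table. Broker addresses, topic names, and paths are placeholders, and the SASL/TLS security options a regulated healthcare environment would require are omitted.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-bronze-ingest").getOrCreate()

# Read raw claim events from Kafka (placeholder broker and topic).
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")
       .option("subscribe", "claims-events")
       .option("startingOffsets", "latest")
       .load())

# Keep the raw payload plus useful metadata in the bronze layer.
bronze = raw.select(
    F.col("key").cast("string").alias("claim_key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("ingest_ts"),
)

(bronze.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/claims_bronze")
 .outputMode("append")
 .start("/mnt/bronze/claims"))
```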

Posted 1 day ago

Apply

4.0 years

3 - 6 Lacs

Gurgaon

On-site

The Deputy Manager is primarily responsible for using data extraction tools to perform in-depth analysis of programs and opportunities in the collections business. The Deputy Manager will make recommendations to improve business profitability or operational processes based on their analysis, and design strategies to implement those recommendations. The role also owns syndication of findings and manages implementation with support.

Responsibilities
- Coach new team members on technical skills and business knowledge. (5%)
- Develop and implement analytics best practices and knowledge management practices. (5%)
- Make recommendations to improve business profitability or processes. Estimate opportunity size and develop the business case. Manage implementation of ideas and project plans with minimal support. (30%)
- Present and share data with other team members and with leadership independently. (10%)
- Understand end-to-end business processes. Independently extract, prepare, and analyze gigabytes of data to support business initiatives (e.g. profitability, performance, variance analysis). Develop solutions with minimal support. Develop techniques and computer algorithms for data analysis to make it meaningful and actionable. (50%)

Minimum Requirements
Education: Bachelor's
Field of Study: Strong and consistent academic record in an engineering, quantitative, or statistical field.
Experience: 4-7 years of experience in analytics or consulting, including 2+ years in financial services.
Language Required: English

Preferred Qualifications
Education: Bachelor's
Field of Study: Strong and consistent academic record in an engineering, quantitative, or statistical field.
Experience: 4-7 years of experience in analytics or consulting. Expert knowledge of Azure / Python (incl. Pandas, PySpark) / SQL. Demonstrated experience in unstructured problem solving and strong analytical aptitude. Advanced use of MS Office (Excel, PowerPoint). Strong communication (written and verbal), storyboarding and presentation skills, project management, and the ability to multitask.

What We Offer
We understand the important balance between work and life, fun and professionalism, and corporation versus community. We strive to support your career aspirations and provide the benefits you need to live a more fulfilling life. Our compensation and benefits programs were created with an 'Employee-First Approach' focused on supporting, developing, and recognizing YOU. We offer a wide array of wellness and mental health initiatives, support volunteerism and environmental efforts, encourage employee education through leadership training, skill-building, and tuition reimbursements, and always strive to provide promotion opportunities from within. All these things are just a small way to show our employees that we recognize their value, we understand what is important to them, and we reward their contributions.

About Us
Headquartered in the United States, Encore Capital Group (Encore) is a publicly traded international specialty finance company operating in various countries around the globe. Through our businesses - such as Midland Credit Management and Cabot Credit Management - we help consumers to restore their financial health as we further our Mission of creating pathways to economic freedom. Our commitment to building a positive workplace culture and a best-in-class employee experience have earned us accolades including Great Place to Work® certifications in many geographies where we operate.
If you have a passion for helping others and thrive at a company that values innovation, inclusion and excellence, then Encore Capital Group is the right place for you. Encore Capital Group and all of its subsidiaries are proud to be an equal opportunity employer and are committed to fostering an inclusive and welcoming environment where everyone feels they belong. We encourage candidates from all backgrounds to apply. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status, or any other status protected under applicable law. If you wish to discuss potential accommodations related to applying for employment, please contact careers.india@mcmcg.com
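For illustration only, the following is a minimal PySpark sketch of the kind of portfolio-level profitability and variance analysis this role describes; the path, table, and column names (collections_accounts, portfolio, recovered_amount, target_amount) are hypothetical placeholders, not Encore systems.

```python
# Minimal PySpark sketch of a portfolio-level variance analysis.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("collections_variance").getOrCreate()

accounts = spark.read.parquet("/data/collections_accounts")  # hypothetical path

variance = (
    accounts
    .groupBy("portfolio", "month")
    .agg(
        F.sum("recovered_amount").alias("actual_recovery"),
        F.sum("target_amount").alias("target_recovery"),
    )
    .withColumn("variance", F.col("actual_recovery") - F.col("target_recovery"))
    .withColumn(
        "variance_pct",
        F.round(F.col("variance") / F.col("target_recovery") * 100, 2),
    )
    .orderBy("portfolio", "month")
)

variance.show(truncate=False)
```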

Posted 1 day ago

Apply

2.0 years

6 - 8 Lacs

Guwahati

On-site

Full Time | Guwahati | CTC: Best in industry

Job Description
We are seeking a talented and experienced Data Engineer / Python Backend Engineer to join our dynamic team. The ideal candidate should have a strong background in Python development, with proficiency in backend frameworks such as FastAPI and Django. They should also possess solid expertise in data engineering concepts and tools, including Pandas, NumPy, and DataFrame APIs. Experience with data warehousing, data modeling, and scaling techniques is highly desirable.

Roles and Responsibilities
Design, develop, and maintain backend services and APIs using FastAPI and Django. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Implement data engineering pipelines for processing, transforming, and analyzing large datasets. Optimize data storage and retrieval processes for performance and scalability. Ensure data quality and integrity through rigorous testing and validation procedures. Stay up to date with emerging technologies and best practices in data engineering and backend development.

Required Skills
Bachelor's degree in Computer Science, Engineering, or a related field. 2+ years of experience in Python development, with a focus on backend frameworks like FastAPI and Django. Expertise in object-oriented design and database design is a must. Proficiency in database technologies such as PostgreSQL and MySQL. Hands-on experience writing SQL queries and knowledge of query performance optimization. Strong understanding of data engineering libraries, including Pandas, NumPy, and Polars (optional). Familiarity with data warehousing concepts and methodologies. Solid grasp of scaling techniques and optimization strategies for handling large datasets.

Nice to Have
Familiarity with PySpark and Kafka. Experience with containerization tools like Docker. Familiarity with cloud platforms such as AWS or Azure.
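As a rough sketch of the backend work described above, here is a minimal FastAPI endpoint that serves an aggregated Pandas result; the CSV file and column names are hypothetical, and a production service would typically read from PostgreSQL or MySQL instead.

```python
# Minimal FastAPI + Pandas sketch; file path and column names are hypothetical.
import pandas as pd
from fastapi import FastAPI

app = FastAPI()

# A real service would query PostgreSQL/MySQL; a CSV keeps the sketch self-contained.
orders = pd.read_csv("orders.csv")  # hypothetical columns: customer_id, amount


@app.get("/customers/{customer_id}/total-spend")
def total_spend(customer_id: int) -> dict:
    subset = orders[orders["customer_id"] == customer_id]
    return {"customer_id": customer_id, "total_spend": float(subset["amount"].sum())}
```

Such an app would normally be served with uvicorn, e.g. `uvicorn main:app --reload` assuming the file is named main.py.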

Posted 1 day ago

Apply

3.0 - 5.0 years

3 - 8 Lacs

Chennai

On-site

3-5 years | 5 openings | Bangalore, Chennai, Kochi, Trivandrum

Role Description
Role Proficiency: Independently develops error-free code with high-quality validation of applications, guides other developers, and assists Lead 1 – Software Engineering.

Outcomes:
Understand and provide input to application/feature/component designs, developing them in accordance with user stories/requirements. Code, debug, test, document, and communicate products/components/features at development stages. Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components. Optimise efficiency, cost, and quality by identifying opportunities for automation/process improvements and agile delivery models. Mentor Developer 1 – Software Engineering and Developer 2 – Software Engineering to effectively perform in their roles. Identify problem patterns and improve the technical design of the application/system. Proactively identify issues/defects/flaws in module/requirement implementation. Assist Lead 1 – Software Engineering on technical design. Review activities and begin demonstrating Lead 1 capabilities in making technical decisions.

Measures of Outcomes:
Adherence to engineering process and standards (coding standards). Adherence to schedule/timelines. Adherence to SLAs where applicable. Number of defects post delivery. Number of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Meeting the defined productivity standards for the project. Number of reusable components created. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements.

Outputs Expected:
Code: Develop code independently for the above. Configure: Implement and monitor the configuration process. Test: Create and review unit test cases, scenarios, and execution. Domain relevance: Develop features and components with a good understanding of the business problem being addressed for the client. Manage Project: Manage module-level activities. Manage Defects: Perform defect RCA and mitigation. Estimate: Estimate time, effort, and resource dependence for one's own work and others' work, including modules. Document: Create documentation for own work and perform peer review of documentation of others' work. Manage knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Status Reporting: Report status of assigned tasks and comply with project-related reporting standards/processes. Release: Execute the release process. Design: LLD for multiple components. Mentoring: Mentor juniors on the team; set FAST goals and provide feedback on mentees' FAST goals.

Skill Examples:
Explain and communicate the design/development to the customer. Perform and evaluate test results against product specifications. Develop user interfaces, business software components, and embedded software components. Manage and guarantee high levels of cohesion and quality. Use data models. Estimate effort and resources required for developing/debugging features/components. Perform and evaluate tests in the customer or target environment. Team player. Good written and verbal communication abilities. Proactively ask for help and offer help.

Knowledge Examples:
Appropriate software programs/modules. Technical designing. Programming languages. DBMS. Operating systems and software platforms. Integrated development environments (IDE). Agile methods. Knowledge of the customer domain and sub-domain where the problem is solved.

Additional Comments:
Design, develop, and optimize large-scale data pipelines using Azure Databricks (Apache Spark). Build and maintain ETL/ELT workflows and batch/streaming data pipelines. Collaborate with data analysts, scientists, and business teams to support their data needs. Write efficient PySpark or Scala code for data transformations and performance tuning. Implement CI/CD pipelines for data workflows using Azure DevOps or similar tools. Monitor and troubleshoot data pipelines and jobs in production. Ensure data quality, governance, and security as per organizational standards.

Skills: Databricks, ADB, ETL

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
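To make the Additional Comments concrete, here is a minimal PySpark sketch of a Databricks-style batch transformation; the mount paths, column names, and Delta output location are hypothetical.

```python
# Minimal PySpark sketch of a Databricks-style batch transformation.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")  # hypothetical landing zone

cleaned = (
    raw
    .dropDuplicates(["order_id"])
    .filter(F.col("order_status").isNotNull())
    .withColumn("order_date", F.to_date("order_timestamp"))
)

daily_revenue = (
    cleaned
    .groupBy("order_date", "country")
    .agg(F.sum("order_total").alias("revenue"), F.count("*").alias("orders"))
)

# On Databricks this would typically be written out as a Delta table.
daily_revenue.write.format("delta").mode("overwrite").save("/mnt/curated/daily_revenue")
```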

Posted 1 day ago

Apply

3.0 years

11 - 24 Lacs

Chennai

On-site

Job Description
Data Engineer, Chennai

We're seeking a highly motivated Data Engineer to join our agile, cross-functional team and drive end-to-end data pipeline development in a cloud-native, big data ecosystem. You'll leverage ETL/ELT best practices and data lakehouse paradigms to deliver scalable solutions. Proficiency in SQL, Python, Spark, and modern data orchestration tools (e.g., Airflow) is essential, along with experience in CI/CD, DevOps, and containerized environments like Docker and Kubernetes. This is your opportunity to make an impact in a fast-paced, data-driven culture.

Responsibilities
Responsible for data pipeline development and maintenance. Contribute to development, maintenance, testing strategy, design discussions, and operations of the team. Participate in all aspects of agile software development, including design, implementation, and deployment. Responsible for the end-to-end lifecycle of new product features/components. Ensure application performance, uptime, and scale, maintaining high standards of code quality and thoughtful application design. Work with a small, cross-functional team on products and features to drive growth. Learn new tools, languages, workflows, and philosophies to grow. Research and suggest new technologies for boosting the product. Have an impact on product development by making important technical decisions and influencing the system architecture, development practices, and more.

Qualifications
Excellent team player with strong communication skills. B.Sc. in Computer Science or similar. 3-5 years of experience in data pipeline development. 3-5 years of experience in PySpark / Databricks. 3-5 years of experience in Python / Airflow. Knowledge of OOP and design patterns. Knowledge of server-side technologies such as Java and Spring. Experience with Docker containers, Kubernetes, and cloud environments. Expertise in testing methodologies (unit testing, TDD, mocking). Fluent with large-scale SQL databases. Good problem-solving and analysis abilities.

Requirements - Advantage
Experience with Azure cloud services. Experience with Agile development methodologies. Experience with Git.

Additional Information
Our Benefits: Flexible working environment, volunteer time off, LinkedIn Learning, Employee Assistance Program (EAP).

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
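As a hedged illustration of the orchestration side of this role, below is a minimal Airflow DAG sketch (using the Airflow 2.4+ `schedule` argument) with stubbed extract and transform tasks; the DAG id and task logic are hypothetical.

```python
# Minimal Airflow DAG sketch for a daily extract -> transform pipeline.
# Task logic is stubbed; names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    print("pull raw data for", context["ds"])


def transform(**context):
    print("run Spark transformation for", context["ds"])


with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task
```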

Posted 1 day ago

Apply

4.0 - 5.0 years

5 - 9 Lacs

Noida

On-site

Job Information
Work Experience: 4-5 years | Industry: IT Services | Job Type: Full time | Location: Noida, India

Job Overview
We are seeking a skilled Data Engineer with 4-5 years of experience to design, build, and maintain scalable data pipelines and analytics solutions within the AWS cloud environment. The ideal candidate will leverage AWS Glue, PySpark, and QuickSight to deliver robust data integration, transformation, and visualization capabilities. This role is critical in supporting business intelligence, analytics, and reporting needs across the organization.

Key Responsibilities
Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources. Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval. Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting. Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights. Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions. Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance. Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools. Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed.

Required Skills & Qualifications
Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies. Strong experience with PySpark for large-scale data processing and transformation. Expertise in SQL and data modeling for relational and non-relational databases. Experience building and optimizing ETL pipelines and data integration workflows. Familiarity with business intelligence and visualization tools, especially Amazon QuickSight. Knowledge of data governance, security, and compliance best practices. Strong programming skills in Python; experience with automation and scripting. Ability to work collaboratively in agile environments and manage multiple priorities effectively. Excellent problem-solving and communication skills.

Preferred Qualifications
AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer).

Good to Have Skills
Understanding of machine learning, deep learning, and Generative AI concepts, including regression, classification, predictive modeling, and clustering.

Interview Process
Internal assessment followed by 3 technical rounds.
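For a concrete picture of the Glue work involved, here is a minimal AWS Glue PySpark job sketch that reads from the Glue Data Catalog and writes Parquet to S3; the database, table, and bucket names are hypothetical.

```python
# Minimal AWS Glue (PySpark) job sketch: read from the Glue Data Catalog,
# apply a simple transform, and write Parquet to S3. Names are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"  # hypothetical catalog entries
).toDF()

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

daily.write.mode("overwrite").parquet("s3://my-curated-bucket/daily_revenue/")  # hypothetical bucket

job.commit()
```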

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description

Essential Functions
Independently develop and deploy ETL jobs in a fast-paced, object-oriented environment. Understand and receive business requirements from clients via a business analyst, architect, or development lead to successfully develop applications, functions, and processes. Conducts and is accountable for unit testing on development assignments. Must be detail-oriented with the ability to follow through on issues. Must be able to work on and manage multiple tasks in addition to working with other areas within the department. Utilizes numerous sources to obtain and build development skills. Enhances existing applications to meet the needs of ongoing efforts within software platforms. Records and tracks time worked on projects and assignments. Develops a general understanding of TSYS/Global Payments, software platforms, and the credit card industry. Participates in team, department, and division meetings as required. Performs other duties as assigned.

Skills/Technical Knowledge
5 to 8 years of strong development background in ETL tools such as GCP Dataflow, PySpark, SSIS, Snowflake, and dbt. Experience with the Google Cloud Platform: GCP Pub/Sub, Datastore, BigQuery, App Engine, Compute Engine, Cloud SQL, Memorystore, Redis, etc. Experience in AWS/Snowflake/Azure is preferred. Proficient in Java, Python, and PySpark. Proficient in GCP BigQuery, Composer, Airflow, Pub/Sub, and Cloud Storage. Experience with build tools (e.g., Maven, Gradle). Proficient in code repo management, branching strategy, and version control using Git, VSTS, TeamForge, etc. Experience developing applications using Eclipse IDE or IntelliJ. Excellent knowledge of relational databases, SQL, and JDBC drivers. Experience with API gateways: DataPower, APIM, Apigee, etc. Strong analytical, planning, and organizational skills with an ability to manage competing demands. Excellent verbal and written communication skills; able to collaborate across business teams (stakeholders) and other technology groups as needed. Experience with NoSQL databases is preferred. Exposure to the payments industry is a plus.

Minimum Qualification
Minimum 5 to 8 years of relevant experience. Software Engineering, Payment Information Systems, or any technical degree; additional experience in lieu of a degree will be considered.
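As an illustrative sketch of the BigQuery side of this stack, the snippet below runs a parameterized query with the google-cloud-bigquery client; the project, dataset, and table names are hypothetical, and credentials are assumed to be configured via Application Default Credentials.

```python
# Minimal google-cloud-bigquery sketch: run a parameterized query and iterate results.
# Project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # assumes default credentials

query = """
    SELECT merchant_id, SUM(amount) AS total_amount
    FROM `my-gcp-project.payments.transactions`
    WHERE transaction_date = @run_date
    GROUP BY merchant_id
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("run_date", "DATE", "2024-01-01")]
)

for row in client.query(query, job_config=job_config).result():
    print(row["merchant_id"], row["total_amount"])
```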

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Responsibilities
Design and implement ETL workflows using AWS Glue, Python, and PySpark. Develop and optimize queries using Amazon Athena and Redshift. Build scalable data pipelines to ingest, transform, and load data from various sources. Ensure data quality, integrity, and security across AWS services. Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions. Monitor and troubleshoot ETL jobs and cloud infrastructure performance. Automate data workflows and integrate with CI/CD pipelines.

Required Skills & Qualifications
Hands-on experience with AWS Glue, Athena, and Redshift. Strong programming skills in Python and PySpark. Experience with ETL design, implementation, and optimization. Familiarity with S3, Lambda, CloudWatch, and other AWS services. Understanding of data warehousing concepts and performance tuning in Redshift. Experience with schema design, partitioning, and query optimization in Athena. Proficiency in version control (Git) and agile development practices.
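A minimal boto3 sketch of working with Athena from Python is shown below; the database, table, region, and S3 output location are hypothetical.

```python
# Minimal boto3 sketch: submit an Athena query and poll for completion.
# Database, table, region, and S3 output location are hypothetical.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

response = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) AS revenue "
                "FROM sales_db.orders GROUP BY order_date",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = response["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the column header
        print([field.get("VarCharValue") for field in row["Data"]])
```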

Posted 1 day ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Jagatpura, Jaipur, Rajasthan

On-site

AWS Data Engineer
Location: Jaipur | Mode: On-site | Experience: 2+ Years

The Role
Zynsera is looking for a talented AWS Data Engineer to join our dynamic team! If you have a strong grasp of AWS services, serverless data pipelines, and Infrastructure as Code, let's connect.

As an AWS Data Engineer at Zynsera, you will:
Develop and optimize data pipelines using AWS Glue, Lambda, and Athena. Build infrastructure using AWS CDK for automation and scalability. Manage structured and semi-structured data with AWS Lakehouse & Iceberg. Design serverless architectures for batch and streaming workloads. Collaborate with senior engineers to drive performance and innovation.

You're a Great Fit If You Have:
Proficiency in AWS Glue, Lambda, Athena, and Lakehouse architecture. Experience with CDK, Python, PySpark, Spark SQL, or Java/Scala. Familiarity with data lakes, data warehousing, and scalable cloud solutions. (Bonus) Knowledge of Firehose, Kinesis, Apache Iceberg, or DynamoDB.

Job Types: Full-time, Permanent
Pay: ₹25,316.90 - ₹45,796.55 per month
Ability to commute/relocate: Jagatpura, Jaipur, Rajasthan: Reliably commute or planning to relocate before starting work (Required)
Experience: AWS Data Engineer: 1 year (Required)
Work Location: In person
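For candidates less familiar with Infrastructure as Code on AWS, here is a minimal AWS CDK v2 (Python) sketch of a stack with an S3 bucket and a Lambda function; the stack, bucket, and handler names are hypothetical, and a recent aws-cdk-lib is assumed.

```python
# Minimal AWS CDK v2 (Python) sketch: an S3 data-lake bucket plus a Lambda function.
# Stack, bucket, and handler names are hypothetical.
from aws_cdk import App, Stack, aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct


class DataLakeStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        raw_bucket = s3.Bucket(self, "RawBucket", versioned=True)

        ingest_fn = _lambda.Function(
            self,
            "IngestFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="ingest.handler",
            code=_lambda.Code.from_asset("lambda"),  # hypothetical local folder
        )

        # Allow the function to read and write the raw zone.
        raw_bucket.grant_read_write(ingest_fn)


app = App()
DataLakeStack(app, "DataLakeStack")
app.synth()
```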

Posted 1 day ago

Apply

0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Azure Data Engineer (ADF, ADB, PySpark, Synapse)
Must-Have: PySpark, SQL, Azure services (ADF, Databricks, Synapse)
Good-to-Have: Python, Azure Key Vault
Experience: 10 to 12 years
Location: Bhubaneswar
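As a minimal sketch of the ADLS-to-Delta work such a role typically involves (the storage account, container, and column names are hypothetical, and authentication is assumed to be configured separately):

```python
# Minimal PySpark sketch: read raw Parquet from ADLS Gen2 and append curated Delta output,
# as might run in an Azure Databricks notebook. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls_curation").getOrCreate()

raw_path = "abfss://raw@mystorageacct.dfs.core.windows.net/events/"        # hypothetical
curated_path = "abfss://curated@mystorageacct.dfs.core.windows.net/events/"  # hypothetical

events = spark.read.format("parquet").load(raw_path)

curated = (
    events
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_timestamp"))
)

curated.write.format("delta").mode("append").partitionBy("event_date").save(curated_path)
```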

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies