
5317 PySpark Jobs - Page 49

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Role: Data Engineer (Scala)
Experience (must have): 5+ years overall, 3+ years relevant
Must-have skills: Spark, SQL, Scala, PySpark
Good to have: AWS, EMR, S3, Hadoop, Control-M

Key responsibilities (to be specified whether the position is an individual one or part of a team):
1) Design strategies and programs to collect, store, analyse and visualize data from various sources.
2) Develop big data solution recommendations and ensure implementation of the chosen big data solution.
3) Program in multiple programming/scripting languages such as Scala, Python, Java, Pig or SQL.
4) Proficient knowledge of big data frameworks such as Spark and MapReduce.
5) Understanding of Hadoop, Hive, HBase, MongoDB and/or MapReduce.
6) Experience with a large cloud-computing infrastructure solution such as Amazon Web Services or Elastic MapReduce.
7) Tune the Spark engine for high-volume processing (approximately a billion records) using BDM.
8) Troubleshoot data issues and perform deep root-cause analysis of any performance issue.
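Responsibility 7 asks for Spark engine tuning at roughly billion-record scale. As a hedged illustration only (it leaves out the BDM layer the listing names, likely Informatica Big Data Management, and every path, column, and setting below is hypothetical), a PySpark job of this shape usually starts with session-level tuning:

```python
# Illustrative PySpark tuning sketch -- not this employer's actual pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("high-volume-aggregation")
    # Settings commonly adjusted for billion-record workloads; the right
    # values depend on cluster size and data skew.
    .config("spark.sql.shuffle.partitions", "2000")
    .config("spark.sql.adaptive.enabled", "true")           # adaptive query execution
    .config("spark.sql.adaptive.skewJoin.enabled", "true")  # mitigate skewed joins
    .getOrCreate()
)

# Hypothetical input path; Parquet keeps scans columnar and cheap.
events = spark.read.parquet("s3://example-bucket/events/")

daily_totals = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "account_id")
    .agg(F.count("*").alias("event_count"), F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/daily_totals/"
)
```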

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Role: Data Engineer (Scala) - requirements and responsibilities identical to the Gurugram listing above.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Role: Data Engineer (Scala) - requirements and responsibilities identical to the Gurugram listing above.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Location: Hyderabad / Chennai / Pune / Mumbai (Hybrid)
Notice Period: up to 60 days

About Us:
Zemoso Technologies is a Software Product Market Fit Studio that brings Silicon Valley-style rapid prototyping and rapid application builds to entrepreneurs and corporate innovation. We offer innovation as a service, working on ideas from scratch and taking them to the Product Market Fit stage using Design Thinking -> Lean Execution -> Agile Methodology. We were featured among Deloitte's 50 fastest-growing tech companies from India three times (2016, 2018, and 2019), and in the Deloitte Technology Fast 500 Asia Pacific in both 2016 and 2018. We are located in Hyderabad, India and Dallas, US, and have recently incorporated another office in Waterloo, Canada.

What You Will Do:
- Develop innovative software solutions using design thinking, lean, and agile methodologies.
- Work on high-quality software products using the latest technologies and platforms.
- Collaborate with fast-paced, dynamic teams to deliver value-driven client experiences.
- Mentor and contribute to the growth of the next generation of developers.

Must-Have Skills:
- Experience: 3+ years.
- Strong proficiency in the Python programming language and Django.
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Nice-to-Have Qualifications:
- Experience with Pandas and PySpark.
- Product and customer-centric mindset.
- Great object-oriented skills, including design patterns.
- Good to great problem-solving and communication skills.
- Experience working with cross-border, distributed teams.

Get to know us better: https://www.zemosolabs.com

Posted 1 week ago

Apply

3.0 - 8.0 years

13 - 23 Lacs

Gurugram

Work from Office

Source: Naukri

Job Title: Data Engineer
Location: Gurugram
Experience: 3-8 years
Job Type: Full-time

About the Role
We are looking for a skilled Data Engineer to join our team. The ideal candidate will have hands-on experience designing, building, and maintaining scalable data pipelines using PySpark, SQL, and cloud technologies such as AWS and Snowflake. You will work closely with data scientists, analysts, and other stakeholders to deliver reliable, high-performance data solutions.

Key Responsibilities
- Design, develop, and optimize scalable ETL/ELT pipelines using PySpark and SQL for processing large datasets.
- Build and maintain data warehouses and data lakes on Snowflake and AWS.
- Implement data ingestion, transformation, and integration from diverse sources.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Monitor and troubleshoot pipeline performance, ensuring data quality and reliability.
- Automate data workflows and optimize data storage for cost and efficiency.
- Stay up to date with industry best practices and emerging technologies in data engineering.

Required Skills & Qualifications
- Strong experience with PySpark for big data processing and ETL pipeline development.
- Proficiency in writing complex SQL queries and optimizing them for performance.
- Hands-on experience with AWS services such as S3, Glue, Lambda, EC2, and Redshift.
- Expertise in designing and managing data warehousing solutions using Snowflake.
- Familiarity with data modeling, schema design, and data governance.
- Experience with version control systems (e.g., Git) and CI/CD pipelines.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.

Preferred Qualifications
- Experience with orchestration tools like Apache Airflow.
- Knowledge of Python scripting beyond PySpark.
- Understanding of data security and compliance standards.
- Experience with containerization tools like Docker and Kubernetes.

Education
Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
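To make the S3-to-Snowflake flow this role describes concrete, here is a minimal, hypothetical PySpark ETL sketch; the paths, table names, and credentials are placeholders, and the Snowflake Spark connector (net.snowflake.spark.snowflake) is assumed to be on the cluster classpath.

```python
# Hedged ETL sketch: extract from S3, transform, load to Snowflake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw orders landed in S3 as JSON (placeholder path).
raw = spark.read.json("s3://example-raw/orders/2025/06/")

# Transform: basic deduplication, filtering, and typing.
orders = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("status").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
)

# Load: push the curated table to Snowflake via the Spark connector.
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "***",          # use a secrets manager in practice
    "sfDatabase": "ANALYTICS",
    "sfSchema": "CURATED",
    "sfWarehouse": "ETL_WH",
}
(orders.write.format("net.snowflake.spark.snowflake")
       .options(**sf_options)
       .option("dbtable", "ORDERS")
       .mode("overwrite")
       .save())
```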

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.

Preferred Education
Master's degree

Required Technical and Professional Expertise
- Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data processing frameworks: knowledge of libraries such as Pandas and NumPy.
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud platforms: experience with AWS, Azure, or GCP, including cloud storage systems.

Preferred Technical and Professional Experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with other technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
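As a hedged illustration of the SQL-plus-PySpark proficiency listed above, the sketch below runs an analytical SQL query over a DataFrame registered as a temporary view, then hands the result to Pandas; the dataset, path, and columns are invented for the example.

```python
# Hedged sketch: Spark SQL over a DataFrame, with a Pandas hand-off.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-on-frames").getOrCreate()

trips = spark.read.parquet("/data/trips/")  # hypothetical path
trips.createOrReplaceTempView("trips")

# The SQL runs through Spark's optimizer, so filters and aggregates
# are pushed down and executed in a distributed fashion.
top_routes = spark.sql("""
    SELECT origin, destination, COUNT(*) AS trip_count
    FROM trips
    WHERE trip_date >= DATE '2025-01-01'
    GROUP BY origin, destination
    ORDER BY trip_count DESC
    LIMIT 20
""")

top_routes.show()
pdf = top_routes.toPandas()  # small result: safe to bring into Pandas
```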

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

We are M&G Global Services Private Limited (formerly known as 10FA India Private Limited, and prior to that Prudential Global Services Private Limited). We are a fully owned subsidiary of the M&G plc group of companies, operating as a Global Capability Centre providing a range of value-adding services to the Group since 2003.

At M&G our purpose is to give everyone real confidence to put their money to work. As an international savings and investments business with roots stretching back more than 170 years, we offer a range of financial products and services through Asset Management, Life and Wealth. All three operating segments work together to deliver attractive financial outcomes for our clients, and superior shareholder returns.

M&G Global Services has rapidly transformed itself into a powerhouse of capability that is playing an important role in M&G plc's ambition to be the best loved and most successful savings and investments company in the world. Our diversified service offerings, extending from Digital Services (Digital Engineering, AI, Advanced Analytics, RPA, and BI & Insights), Business Transformation, Management Consulting & Strategy, Finance, Actuarial, Quants, Research, Information Technology, Customer Service, Risk & Compliance and Audit, provide our people with exciting career growth opportunities. Through our behaviours of telling it like it is, owning it now, and moving it forward together with care and integrity, we are creating an exceptional place to work for exceptional talent.

Job Description
Job Title: Lead Data Engineer
Grade: 2B
Level: Senior Manager - Data
Job Function: Digital Transformation
Job Sub Function: Azure Data Engineering & DevOps & BI
Reports to: 3B (VP - Data Engineering)
Location: Mumbai
Business Area: M&G Global Services

Overall Job Purpose
To implement data engineering solutions using the latest technologies available in the Azure cloud space, conforming to best-in-class design standards and agreed requirements, to achieve business objectives.

Accountabilities/Responsibilities
- Lead data engineering projects to build and operationalize data solutions for business using Azure services in combination with custom solutions: Azure Data Factory, Azure Data Flows, Azure Databricks, Azure Data Lake Gen 2, Azure SQL, etc.
- Proven experience leading a team of data engineers, providing technical guidance and ensuring alignment with agreed architectural principles.
- Experience in migrating on-premise data warehouses to data platforms on the Azure cloud.
- Design and implement data engineering, ingestion, and transformation functions using ADF and Databricks (a hedged PySpark sketch of this pattern follows this listing).
- Proficient in PySpark.
- Experience in building Python-based APIs on Azure Function Apps.
- Experience with Azure Logic Apps.
- Experience in Lakehouse/data warehouse implementation using modern data platform architecture.
- Capacity planning and performance tuning of ADF and Databricks pipelines.
- Support data visualization development using Power BI.
- Exposure across the whole SDLC process, including testing and deployment.
- Experience in relational and dimensional modelling, including big data technologies.
- Experience in Azure DevOps: building CI/CD pipelines for ADF, ADLS, Databricks, Azure SQL DB, etc.
- Experience working in secured Azure environments using Azure Key Vaults, Service Principals, and Managed Identities.
- Good to have: knowledge of Apigee (Google's API management).
- Understanding of data masking, encryption, and other practices used in handling sensitive data.
- Ability to interact with the business for requirement gathering and query resolution.
- Working with offshore office-based development teams, collaborating within a team environment, and participating in typical project lifecycle activities such as requirement analysis, testing and release.
- Develop Azure data skills within the team through knowledge-sharing sessions, articles, etc.
- Adherence to the organisation's Risk & Controls requirements.
- Skills in stakeholder management, process adherence, planning and documentation.

Key Stakeholder Management
Internal: Business Teams, Project Manager, Architects, Data Scientists, Team members
External: not specified

Knowledge, Skills, Experience & Educational Qualification
Knowledge & Skills: Azure Data Factory, Azure Data Lake Storage V2, Azure SQL, Azure Databricks, PySpark, Azure DevOps, Power BI reporting, technical leadership, confidence and excellent communication.
Experience: 12+ years overall; 5+ years in Azure data engineering; 5+ years managing data deliveries.
Educational Qualification: Graduate/Post-graduate, preferably with specialisation in Computer Science, Statistics, Mathematics, Data Science, Engineering or a related discipline. Microsoft Azure certification (good to have).

M&G Behaviours relevant to all roles:
- Inspire Others: support and encourage each other, creating an environment where everyone can contribute and succeed.
- Embrace Change: be open to change, willing to be challenged and able to adapt quickly and imaginatively to new ideas.
- Deliver Results: focus on performance, set high standards and deliver with energy and determination.
- Keep it simple: cut through complexity, keep the outcome in mind, keeping your approach simple and adapting your message to every audience.

We have a diverse workforce and an inclusive culture at M&G Global Services. Regardless of gender, ethnicity, age, sexual orientation, nationality, disability or long-term condition, we are looking to attract, promote and retain exceptional people. We also welcome those who take part in military service and those returning from career breaks.
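For orientation only, a minimal PySpark sketch of the ADLS Gen 2 ingestion-and-transformation pattern this role describes, as it might look on Azure Databricks; the storage account, containers, and columns are hypothetical, and authentication (service principal or managed identity) is assumed to be configured separately.

```python
# Hedged Databricks-style sketch; `spark` is predefined in notebooks,
# but getOrCreate() keeps the snippet self-contained elsewhere.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

source = "abfss://landing@examplestorage.dfs.core.windows.net/sales/2025/"
target = "abfss://curated@examplestorage.dfs.core.windows.net/sales_daily/"

sales = spark.read.option("header", "true").csv(source)

daily = (
    sales.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .groupBy("sale_date", "region")
         .agg(F.sum("amount").alias("total_amount"))
)

# Delta Lake is the usual curated format on Databricks.
daily.write.format("delta").mode("overwrite").save(target)
```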

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Dear Associate,

Greetings from Tata Consultancy Services!

Thank you for expressing your interest in exploring a career possibility with the TCS family. We have a job opportunity for AWS PySpark at Tata Consultancy Services on 14th June 2025.

Hiring for: AWS PySpark

Mandatory Skills:
- Design and implement data pipelines, ETL processes, and data storage solutions that support data-intensive applications (4+ years).
- Develop, test, and maintain architectures such as databases and large-scale data processing systems using tools such as Spark, Databricks and AWS (4+ years).
- Solid grounding in Java/Python, data structures and algorithms (6+ years).
- Deep experience in cloud development with the AWS platform (3+ years).

Walk-in Location: Pune
Experience: 5-15 years
Mode of interview: in-person walk-in drive
Date of interview: 14 June 2025
Venue: Zone 3 Auditorium, Tata Consultancy Services, Sahyadri Park, Rajiv Gandhi Infotech Park, Hinjewadi Phase 3, Pune - 411057

If you are interested in this exciting opportunity, please share your updated resume at jeena.james1@tcs.com along with the additional information below:

Name:
Preferred Location:
Contact No:
Email ID:
Highest Qualification:
Current Organization:
Total Experience:
Relevant Experience in AWS PySpark:
Current CTC:
Expected CTC:
Notice Period:
Gap Duration:
Gap Details:
Attended interview with TCS in the past (details):
TCS iBegin portal EP ID (if already registered):
Willing to attend the walk-in on 14th June (Yes/No):

Note: only eligible candidates with relevant experience will be contacted further.

Thanks & Regards,
Jeena James
Human Resources - Talent Acquisition Group, Tata Consultancy Services
Website: http://www.tcs.com
Email: jeena.james1@tcs.com

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your Primary Responsibilities Include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Preferred Education
Master's degree

Required Technical and Professional Expertise
- Must have 5+ years of experience in big data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Experience developing Python and PySpark programs for data analysis.
- Good working experience using Python to develop custom frameworks for rule generation (akin to a rules engine).
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark.
- Experience applying business transformations with Apache Spark DataFrames/RDDs and using Hive context objects to perform read/write operations.

Preferred Technical and Professional Experience
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages such as Python, Java and Scala.
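The Hive read/write bullet above refers to the legacy HiveContext API; in Spark 2+ the equivalent is a Hive-enabled SparkSession. A hedged sketch of that pattern, with invented database, table, and column names:

```python
# Hedged sketch: DataFrame transformations over Hive-managed tables.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("hive-transforms")
    .enableHiveSupport()   # modern replacement for the old HiveContext
    .getOrCreate()
)

# Hypothetical Hive source table.
customers = spark.table("raw_db.customers")

active = (
    customers.filter(F.col("status") == "ACTIVE")
             .withColumn("full_name",
                         F.concat_ws(" ", "first_name", "last_name"))
)

# Persist the transformed result back to Hive.
active.write.mode("overwrite").saveAsTable("curated_db.active_customers")
```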

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

Are you passionate about shaping the future of travel? Do you want to redefine how people plan, search, and book their journeys? At Expedia Group, we believe travel connects the world, and we're driven by the endless possibilities it creates. We're looking for a skilled and driven Data Scientist III to support our content product teams with deep analytics and decision-making insights. This role collaborates cross-functionally with product, engineering, data, strategy, and other business units to optimize user engagement, conversion, and traffic, and to support the development of innovative content products.

In This Role, You Will:
- Deliver actionable insights into customer behavior and identify opportunities to improve our site and app experiences for travelers worldwide.
- Partner closely with product managers and engineers to design, implement, and analyze A/B tests and other experimentation frameworks (a small worked example follows this listing).
- Act as a trusted analytics advisor to cross-functional teams, offering clear communication and updates on project progress and outcomes.
- Collaborate with data engineering to design and maintain scalable, efficient data pipelines and infrastructure.
- Develop and maintain robust dashboards and automated reporting tools to streamline recurring analysis.
- Frame complex business problems, extract and analyze data, and present key findings to leadership and stakeholders.
- Own and drive high-impact analytical projects from conception through execution.

Experience and Qualifications:
- 5+ years of experience in quantitative analysis, with a passion for tackling complex business problems.
- Proficient in SQL and Excel, with hands-on experience using Python and/or PySpark for data transformation, analysis, and visualization.
- Familiar with machine learning techniques, including supervised and unsupervised models.
- Experienced in advanced analytics methods such as predictive modeling, hypothesis testing, A/B testing, and quasi-experimental design.
- Skilled in data visualization using tools like Tableau. Experience with Adobe Analytics or Google Analytics is a plus.
- Able to thrive in a fast-paced, dynamic environment, balancing multiple priorities with strong ownership and proactive problem-solving skills.

Accommodation Requests
If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for our award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs.

Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
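As a small worked example of the A/B-test analysis this role centres on, with invented counts and statsmodels standing in for whatever internal tooling is actually used, a two-proportion z-test looks like this:

```python
# Hypothetical conversion counts from an A/B test (control vs. variant).
from statsmodels.stats.proportion import proportions_ztest

conversions = [1840, 1975]   # successes in control, variant
exposures = [52000, 51800]   # users exposed in control, variant

# Two-sample z-test on conversion rates.
stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
lift = conversions[1] / exposures[1] - conversions[0] / exposures[0]

print(f"absolute lift: {lift:.4%}, z = {stat:.2f}, p = {p_value:.4f}")
```

In practice the decision rule (significance threshold, minimum detectable effect, test duration) is fixed before the experiment starts, not after looking at the numbers.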

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Acuity Knowledge Partners is hiring Data Engineers to play a vital role in our Data and Technology Services team. The role holder will primarily collaborate closely with a leading global hedge fund on data engagements, and partner with the data strategy and sourcing team on data requirements to design data pipelines and delivery structures.

Immediate joiners are highly preferred.

Total Experience: 3 to 7 years
Job Location: Gurugram / Bengaluru / Pune (hybrid)

Mandatory Requirements:
- Bachelor's or Master's degree in science or engineering disciplines (CSE/IT/ECE or E&T).
- Strong coding skills in Python and SQL, including core programming and data manipulation; hands-on experience with cloud-native platforms such as AWS; and solid proficiency in core data engineering concepts such as PySpark and DataFrames, with practical experience in Snowflake and Docker.
- 3+ years of strong hands-on experience as a Data Engineer with data modeling, data warehousing, and building data pipelines.
- Expert in Python and in Snowflake data ingestion and manipulation on the back end.
- Expert in working with different data formats, e.g. Parquet, Avro, JSON, XLSX, etc., and experience with web scraping and reviewing/editing web-scraping code.
- Experience working with FTP, SFTP, API, S3 and other distribution channels to source data.
- Excellent communication skills, both written and verbal.

Key Responsibilities:
- Partner with the data strategy and sourcing team on data requirements to design data pipelines and delivery structures.
- Engage with vendors and technical teams to systematically ingest, evaluate, and create valuable data assets.
- Collaborate with the core engineering team to create central capabilities to process, manage, monitor, and distribute datasets at scale.
- Apply robust data quality rules to systematically qualify data deliveries and guarantee the integrity of datasets.
- Engage with technical and non-technical clients as an SME on data asset offerings.

Interested candidates, please share your updated CV at nihal.upadhyay@acuitykp.com
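For illustration, a hedged PySpark sketch of ingesting the formats the listing names; all paths are placeholders, Avro needs the spark-avro package on the classpath, and XLSX is routed through pandas (which needs openpyxl) because Spark has no native Excel reader.

```python
# Hedged multi-format ingestion sketch; every path is hypothetical.
import pandas as pd
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("multi-format-ingest")
    # spark-avro is an external package; version must match your Spark build.
    .config("spark.jars.packages", "org.apache.spark:spark-avro_2.12:3.5.0")
    .getOrCreate()
)

parquet_df = spark.read.parquet("s3://example/data/parquet/")
json_df = spark.read.json("s3://example/data/json/")
avro_df = spark.read.format("avro").load("s3://example/data/avro/")

# XLSX route: read with pandas, then convert to a Spark DataFrame.
xlsx_df = spark.createDataFrame(pd.read_excel("/tmp/vendor_file.xlsx"))
```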

Posted 1 week ago

Apply

5.0 - 15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Dear Associate,

Greetings from Tata Consultancy Services!

Thank you for expressing your interest in exploring a career possibility with the TCS family. We have a job opportunity for AWS Data Scientist at Tata Consultancy Services on 14th June 2025.

Hiring for: AWS Data Scientist

Mandatory Skills: Python, AI/ML, MLOps, Spark, Hadoop, PyTorch, TensorFlow, Matplotlib, Seaborn, Tableau, Power BI, scikit-learn, XGBoost, AWS, Azure, Databricks, PySpark, SQL, Snowflake

Walk-in Location: Pune
Experience: 5-15 years
Mode of interview: in-person walk-in drive
Date of interview: 14 June 2025
Venue: Zone 3 Auditorium, Tata Consultancy Services, Sahyadri Park, Rajiv Gandhi Infotech Park, Hinjewadi Phase 3, Pune - 411057

If you are interested in this exciting opportunity, please share your updated resume at jeena.james1@tcs.com along with the additional information below:

Name:
Preferred Location:
Contact No:
Email ID:
Highest Qualification:
Current Organization:
Total Experience:
Relevant Experience as a Data Scientist:
Current CTC:
Expected CTC:
Notice Period:
Gap Duration:
Gap Details:
Attended interview with TCS in the past (details):
TCS iBegin portal EP ID (if already registered):
Willing to attend the walk-in on 14th June (Yes/No):

Note: only eligible candidates with relevant experience will be contacted further.

Thanks & Regards,
Jeena James
Human Resources - Talent Acquisition Group, Tata Consultancy Services
Website: http://www.tcs.com
Email: jeena.james1@tcs.com

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Moon#168 - Senior Data Engineer

Who is Mastercard?
Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Overview
Ethoca, a Mastercard company, is seeking a Senior Data Engineer to join our team in Pune, India to drive data enablement and explore big data solutions within our technology landscape. The role is visible and critical as part of a high-performing team; it will appeal to you if you have an effective combination of domain knowledge, relevant experience, and the ability to execute on the details. You will bring cutting-edge software and full-stack development skills, with advanced knowledge of cloud and data lakes, while working with massive data volumes. You will own this: our teams are small, agile, and focused on the needs of the high-growth fintech marketplace. You will work across functional teams within Ethoca and Mastercard to deliver on cloud strategy. We are committed to making our systems resilient and responsive, yet easily maintainable, in the cloud.

Key Responsibilities
- Design, develop, and optimize batch and real-time data pipelines using Snowflake, Snowpark, Python, and PySpark.
- Build data transformation workflows using dbt, with a strong focus on Test-Driven Development (TDD) and modular design.
- Implement and manage CI/CD pipelines using GitLab and Jenkins, enabling automated testing, deployment, and monitoring of data workflows.
- Deploy and manage Snowflake objects using schemachange, ensuring controlled, auditable, and repeatable releases across environments.
- Administer and optimize the Snowflake platform, handling performance tuning, access management, cost control, and platform scalability.
- Drive DataOps practices by integrating testing, monitoring, versioning, and collaboration into every phase of the data pipeline lifecycle.
- Build scalable and reusable data models that support business analytics and dashboarding in Power BI.
- Develop and support real-time data streaming pipelines (e.g., using Kafka and Spark Structured Streaming) for near-instant data availability (a hedged sketch follows this listing).
- Establish and implement data observability practices, including monitoring data quality, freshness, lineage, and anomaly detection across the platform.
- Plan and own deployments, migrations, and upgrades across data platforms and pipelines to minimize service impacts, including developing and executing mitigation plans.
- Collaborate with stakeholders to understand data requirements and deliver reliable, high-impact data solutions.
- Document pipeline architecture, processes, and standards, promoting consistency and transparency across the team.
- Apply exceptional problem-solving and analytical skills to troubleshoot complex data and system issues.
- Demonstrate excellent written and verbal communication skills when collaborating across technical and non-technical teams.

Required Qualifications
- Bachelor's degree in computer science or a related technical field including programming; tenured in Computer Science/Engineering or Software Engineering.
- Deep hands-on experience with Snowflake (including administration), Snowpark, and Python.
- Strong background in PySpark and distributed data processing.
- Proven track record using dbt for building robust, testable data transformation workflows following TDD.
- Familiarity with schemachange for Snowflake object deployment and version control.
- Proficient in CI/CD tooling, especially GitLab and Jenkins, with a focus on automation and DataOps.
- Experience with real-time data processing and streaming pipelines.
- Strong grasp of cloud-based database infrastructure (AWS, Azure, or GCP).
- Skilled in developing insightful dashboards and scalable data models using Power BI.
- Expert in SQL development and performance optimization.
- Demonstrated success in building and maintaining data observability tools and frameworks.
- Proven ability to plan and execute deployments, upgrades, and migrations with minimal disruption to operations.
- Strong communication, collaboration, and analytical thinking across technical and non-technical stakeholders.

Ideally you have experience in banking, e-commerce, credit cards or payment processing, and exposure to both SaaS and premises-based architectures. In addition, you have a post-secondary degree in computer science, mathematics, or quantitative science.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

R-247084
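As a hedged sketch of the Kafka-based streaming responsibility above (not Ethoca's actual pipeline: brokers, topic, schema, and paths are invented, and the spark-sql-kafka package is assumed to be on the classpath):

```python
# Hedged Structured Streaming sketch: Kafka in, Parquet out.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
    .option("subscribe", "payment-events")              # placeholder topic
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload with a DDL schema.
parsed = stream.select(
    F.from_json(
        F.col("value").cast("string"),
        "event_id STRING, amount DOUBLE, event_ts TIMESTAMP",
    ).alias("e")
).select("e.*")

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/data/streams/payments/")
    .option("checkpointLocation", "/chk/payments/")  # required for recovery
    .outputMode("append")
    .start()
)
query.awaitTermination()
```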

Posted 1 week ago

Apply

5.0 - 15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Dear Associate,

Greetings from Tata Consultancy Services!

Thank you for expressing your interest in exploring a career possibility with the TCS family. We have a job opportunity for AWS Data Science with DevOps and Databricks at Tata Consultancy Services on 14th June 2025.

Hiring for: AWS Data Science with DevOps and Databricks

Mandatory Skills: SQL, PySpark/Python, AWS, Databricks, Snowflake, S3, EMR, EC2, Airflow, Lambda

Experience: 5-15 years
Mode of interview: in-person walk-in drive
Date of interview: 14 June 2025
Venue: Zone 3 Auditorium, Tata Consultancy Services, Sahyadri Park, Rajiv Gandhi Infotech Park, Hinjewadi Phase 3, Pune - 411057

If you are interested in this exciting opportunity, please share your updated resume at jeena.james1@tcs.com along with the additional information below:

Name:
Preferred Location:
Contact No:
Email ID:
Highest Qualification:
Current Organization:
Total Experience:
Relevant Experience in DevOps with Databricks:
Current CTC:
Expected CTC:
Notice Period:
Gap Duration:
Gap Details:
Attended interview with TCS in the past (details):
TCS iBegin portal EP ID (if already registered):
Willing to attend the walk-in on 14th June (Yes/No):

Note: only eligible candidates with relevant experience will be contacted further.

Thanks & Regards,
Jeena James
Human Resources - Talent Acquisition Group, Tata Consultancy Services
Website: http://www.tcs.com
Email: jeena.james1@tcs.com

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

The Applications Development Senior Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs, in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Key Responsibilities:
- Design, develop, and optimize large-scale data pipelines and workflows using Big Data technologies such as Hadoop, Hive, Impala, Spark, and PySpark.
- Build and maintain data integration solutions to process structured and unstructured data from various sources.
- Implement and manage CI/CD pipelines to automate deployment and testing of data engineering solutions.
- Work with relational databases like Oracle to design and optimize data storage and retrieval.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Ensure data quality, security, and governance across all data engineering processes.
- Monitor and troubleshoot performance issues in data pipelines and systems.
- Stay updated with the latest trends and advancements in Big Data and data engineering technologies.

Required Skills and Qualifications:
- Proven experience in Big Data technologies: Hadoop, Hive, Impala, Spark, and PySpark.
- Strong programming skills in Python, Java, or Scala.
- Hands-on experience with CI/CD tools like Jenkins, Git, or similar.
- Proficiency in working with relational databases, especially Oracle.
- Solid understanding of data modeling, ETL processes, and data warehousing concepts.
- Experience with cloud platforms (e.g., AWS, Azure, or GCP) is a plus.
- Strong problem-solving skills and ability to work in a fast-paced environment.
- Excellent communication and collaboration skills.

Education: Bachelor's degree/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Source: LinkedIn

When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You'll Be Doing...
As a DMTS (AI Science) you will own and drive end-to-end solutions for cognitive and GenAI-driven use cases:
- Designing and building scalable cognitive and generative AI solutions to meet the needs of a given business engagement.
- Providing technical thought leadership on model architecture, delivery, monitoring, measurement and model lifecycle best practices.
- Working in a collaborative environment with global teams to drive solutioning of business problems.
- Developing end-to-end analytical solutions and articulating insights to leadership.
- Providing data-driven recommendations to the business by clearly articulating complex modeling concepts through the generation and delivery of presentations.
- Analyzing and modeling both structured and unstructured data from a number of distributed client and publicly available sources.
- Assisting with the mentorship and development of junior members, and driving the team towards solutions.
- Assisting in growing the data science practice at Verizon by meeting business goals through client prospecting, responding to model POCs, identifying and closing opportunities within identified insights, writing white papers, exploring new tools, and defining best practices.

What We're Looking For...
You have strong ML/NLP/GenAI skills and are eager to work in a collaborative environment with global teams to drive NLP/GenAI applications in business problems. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various stakeholders and cross-functional teams to implement data-science-driven business solutions. You take pride in your role as a data scientist and evangelist and enjoy adding to the systems, concepts and models that enrich the practice. You enjoy mentoring and empowering the team to expand their technical capabilities.

You'll Need to Have:
- Bachelor's degree or four or more years of work experience, plus six or more years of work experience.
- Data scientist and thought leader with experience implementing production use cases in GenAI and cognitive AI.
- Ten or more years of hands-on experience implementing large-scale NLP projects and fine-tuning and evaluating LLMs for downstream tasks such as text generation, classification, summarization, question answering, and entity extraction (a minimal summarization sketch follows this listing).
- Working knowledge of agentic AI frameworks like LangChain, LangGraph, CrewAI, etc.
- Ability to guide the team to correctly analyze cognitive insights and leverage unstructured conversational data to create transformative, intelligent, context-aware and adaptive AI systems.
- Experience in machine learning and deep learning model development and deployment from scratch in Python.
- Working knowledge of NLP frameworks and libraries such as NLTK, spaCy, Transformers, PyTorch, TensorFlow, and Hugging Face APIs.
- Working knowledge of various supervised and unsupervised ML algorithms.
- Knowledge of data preprocessing techniques and their impact on an algorithm's accuracy, precision and recall.
- Knowledge of and implementation experience with deep learning: convolutional neural nets (CNN), recursive neural nets (RNN), long short-term memory (LSTM), generative adversarial networks (GAN), and deep reinforcement learning.
- Experience with RESTful, JSON API services.
- Working knowledge of word embeddings, TF-IDF, tokenization, n-grams, stemming, lemmatization, part-of-speech tagging, entity resolution, ontology, lexicology, phonetics, intents, entities, and context.
- Experience in analyzing live chat/call conversations with agents.
- Expertise in Python, SQL, PySpark, Scala and/or other languages and tools.
- Understanding of validation frameworks for generative model output, and a perspective on future-ready systems to scale validation.
- Familiarity with GPU/CPU architecture, distributed computing, and the general infrastructure needed to scale GenAI models.
- Ability to provide technical thought leadership on model architecture, delivery, monitoring, measurement and model lifecycle best practices.

Even better if you have one or more of the following:
- PhD or an advanced degree or specialization in Artificial Intelligence.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.

Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
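To illustrate one of the downstream LLM tasks listed above (summarization of conversational data), a minimal Hugging Face sketch; the model choice and transcript are illustrative placeholders, not Verizon's actual stack.

```python
# Illustrative only: model and text are placeholders.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

transcript = (
    "Agent: Thanks for calling, how can I help? Customer: My bill went up "
    "this month and I don't know why. Agent: I see a one-time activation "
    "fee and a plan change on the 12th; I can walk you through both and "
    "apply a loyalty credit to offset part of the increase."
)

summary = summarizer(transcript, max_length=45, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```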

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Location: Pune / Mumbai / Bangalore / Chennai / Hyderabad (Hybrid)
Notice Period: up to 60 days

Key Responsibilities:

Python Proficiency:
● Demonstrate a strong command of the Python programming language, actively contributing to the development and maintenance of data engineering solutions.

Data Engineering Expertise:
● Set up and maintain efficient data pipelines, ensuring smooth data flow and integration across systems. Experience with SQL and data warehouses/data lakes is required. Contribute to the establishment and maintenance of data lakes, implementing industry best practices. Execute data scrubbing techniques and implement data validation processes to ensure data integrity and quality.

Tool and Platform Proficiency:
● Experience with PySpark and/or the Databricks platform is required; beyond this, expertise in at least one popular tool/platform within the data engineering domain is nice to have. Stay informed about industry trends, exploring and adopting tools to optimize data engineering processes.

Collaboration and Communication:
● Collaborate effectively with cross-functional teams, including data scientists, software engineers, and business analysts. Communicate technical concepts to non-technical stakeholders, fostering collaboration and innovation within the team.

Documentation and Best Practices:
● Contribute to maintaining comprehensive documentation for data engineering processes, ensuring knowledge transfer within the team. Adhere to best practices in data engineering, promoting a culture of quality and efficiency.

Must-Have Skills:
● Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
● Minimum of 4 years of proven experience as a Data Engineer.
● Strong proficiency in the Python programming language and SQL.
● Experience in setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.

Nice-to-Have Skills (Optional):
● Exposure to cloud-based data platforms (AWS/Azure/GCP) and PySpark.
● Basic knowledge of big data technologies such as Hadoop, Spark or Snowflake.
● Familiarity with containerization tools like Docker or Kubernetes.
● Interest in data visualization tools (Tableau, Power BI, etc.).
● Certifications in relevant data engineering or machine learning technologies.
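As a hedged illustration of the data scrubbing and validation step described above, with invented rules, columns, and paths:

```python
# Hedged data-quality sketch: count nulls, quarantine bad rows.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
orders = spark.read.parquet("/lake/raw/orders/")  # placeholder path

# Rule 1: required fields must be present; tally nulls per column.
null_counts = orders.select(
    *[F.sum(F.col(c).isNull().cast("int")).alias(c)
      for c in ["order_id", "amount"]]
).first().asDict()

# Rule 2: split rows that fail basic sanity checks into a reject set.
valid = orders.filter((F.col("amount") > 0) & F.col("order_id").isNotNull())
rejected = orders.subtract(valid)

print(f"null counts: {null_counts}, rejected rows: {rejected.count()}")
valid.write.mode("overwrite").parquet("/lake/validated/orders/")
```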

Posted 1 week ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: candidates with 12+ years of hands-on experience
Position: Senior Manager

Required Skills
Successful candidates will have demonstrated the following skills and characteristics:

Must Have
- Deep expertise in AI/ML solution design, including supervised and unsupervised learning, deep learning, NLP, and optimization.
- Strong hands-on experience with ML/DL frameworks like TensorFlow, PyTorch, scikit-learn, H2O, and XGBoost.
- Solid programming skills in Python, PySpark, and SQL, with a strong foundation in software engineering principles.
- Proven track record of building end-to-end AI pipelines, including data ingestion, model training, testing, and production deployment.
- Experience with MLOps tools such as MLflow, Airflow, DVC, and Kubeflow for model tracking, versioning, and monitoring (a minimal MLflow sketch follows this listing).
- Understanding of big data technologies like Apache Spark, Hive, and Delta Lake for scalable model development.
- Expertise in AI solution deployment across cloud platforms like GCP, AWS, and Azure using services like Vertex AI, SageMaker, and Azure ML.
- Experience in REST API development, NoSQL database design, and RDBMS design and optimization.
- Familiarity with API-based AI integration and containerization technologies like Docker and Kubernetes.
- Proficiency in data storytelling and visualization tools such as Tableau, Power BI, Looker, and Streamlit.
- Programming skills in Python and either Scala or R, with experience using Flask and FastAPI.
- Experience with software engineering practices, including use of GitHub, CI/CD, code testing, and analysis.
- Skilled in using Apache Spark, including PySpark and Databricks, for big data processing.
- Strong understanding of foundational data science concepts, including statistics, linear algebra, and machine learning principles.
- Knowledgeable in integrating DevOps, MLOps, and DataOps practices to enhance operational efficiency and model deployment.
- Experience with cloud infrastructure services like Azure and GCP.
- Familiarity with observability and monitoring tools like Prometheus and the ELK stack, adhering to SRE principles and techniques.
- Cloud or data engineering certifications or specialization certifications (e.g. Google Professional Machine Learning Engineer; Microsoft Certified: Azure AI Engineer Associate, Exam AI-102; AWS Certified Machine Learning - Specialty (MLS-C01); Databricks Certified Machine Learning).

Nice To Have
- Experience implementing generative AI, LLMs, or advanced NLP use cases.
- Exposure to real-time AI systems, edge deployment, or federated learning.
- Strong executive presence and experience communicating with senior leadership or CXO-level clients.

Roles and Responsibilities
- Lead and oversee complex AI/ML programs, ensuring alignment with business strategy and delivering measurable outcomes.
- Serve as a strategic advisor to clients on AI adoption, architecture decisions, and responsible AI practices.
- Design and review scalable AI architectures, ensuring performance, security, and compliance.
- Supervise the development of machine learning pipelines, enabling model training, retraining, monitoring, and automation.
- Present technical solutions and business value to executive stakeholders through impactful storytelling and data visualization.
- Build, mentor, and lead high-performing teams of data scientists, ML engineers, and analysts.
- Drive innovation and capability development in areas such as generative AI, optimization, and real-time analytics.
- Contribute to business development efforts, including proposal creation, thought leadership, and client engagements.
- Partner effectively with cross-functional teams to develop, operationalize, integrate, and scale new algorithmic products.
- Develop code, CI/CD, and MLOps pipelines, including automated tests, and deploy models to cloud compute endpoints.
- Manage cloud resources and build accelerators to enable other engineers, with experience working across two hyperscale clouds.
- Demonstrate effective communication skills, coaching and leading junior engineers, with a successful track record of building production-grade AI products for large organizations.

Professional and Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA from a reputed institute.
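To ground the end-to-end-pipeline and MLflow-tracking requirements above, a minimal train-and-log sketch; the dataset, model, and hyperparameters are illustrative only, not a client configuration.

```python
# Hedged sketch: train a scikit-learn model and track it with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="gbm-baseline"):
    model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
    model.fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Parameters, metrics, and the model artifact all land in the run.
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")

print("logged run with test AUC", auc)
```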

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About PwC:
PricewaterhouseCoopers (PwC) is a leading global consulting firm. For more than 160 years, PwC has worked to build trust in society and solve important problems for clients and the communities in which we live and work. Today we have more than 276,000 people across 157 countries working towards this goal. The US Advisory Bangalore Acceleration Center is a natural extension of our United States-based consulting capabilities, providing support to a broad range of practice teams. Our US-owned ACs are fully integrated into our client-facing teams and are key to PwC's success in the marketplace.

Job Summary:
At PwC, we are betting big on data, analytics, and a digital revolution to transform the way deals are done. Analytics is increasingly a major driver of competitive advantage in deal-making and value creation for private-equity-owned portfolio companies. PwC brings data-driven insights through advanced techniques to help clients make better strategic decisions, uncover value, and improve returns on their investments. The PwC Deal Analytics & Value Creation practice is a blend of deals and consulting professionals with diverse skills and backgrounds, including financial, commercial, operational, and data science. We support private equity and corporate clients across all phases of the deal lifecycle, including diligence, post-deal, and preparation for exit/divestiture. Our data-driven approach delivers insights in diligence at deal speed, works with clients to improve performance post-deal, and brings a commercial insights lens through third-party and alternative data to help inform decisions. A career in our fast-paced Deal Analytics & Value Creation practice, a business unit within the PwC deals platform, will allow you to work with top private equity and corporate clients across all sectors on complex and dynamic multi-billion-dollar decisions. Each client, deal, and situation is unique, and the ability to translate data into actionable insights for our clients is crucial to our continued success.

Job Description
As a Senior Associate, you'll work as part of a team of problem solvers, helping solve complex business issues from strategy to execution. PwC Professional skills and responsibilities for this management level include but are not limited to:
- Use feedback and reflection to develop self-awareness and personal strengths, and address development areas.
- Delegate to others to provide stretch opportunities, coaching them to deliver results.
- Demonstrate critical thinking and the ability to bring order to unstructured problems.
- Use a broad range of tools and techniques to extract insights from current industry or sector trends.
- Drive day-to-day deliverables in the team by helping in work planning, and review your work and that of others for quality, accuracy, and relevance.
- Contribute to practice enablement and business development activities.
- Learn new tools and technologies as required.
- Develop/implement automation solutions and capabilities that are aligned to the client's business requirements.
- Know how and when to use the tools available for a given situation, and explain the reasons for this choice.
- Use straightforward communication, in a structured way, when influencing and connecting with others.
- Uphold the firm's code of ethics and business conduct.

Preferred Fields of Study/Experience
- Dual degree/Master's degree from reputed institutes in Data Science, Data Analytics, Finance, Accounting, Business Administration/Management, Economics, Statistics, Computer and Information Science, Management Information Systems, Engineering, or Mathematics.
- A total of 4-7 years of work experience in analytics consulting and/or transaction services with top consulting organizations.
- Experience across the entire deals cycle (diligence, post-deal value creation, and exit preparation).

Preferred Knowledge/Skills
Our team is a blend of deals and consulting professionals with an ability to work with data and teams across our practice to bring targeted commercial and operational insights through industry-specific experience and cutting-edge techniques. We are looking for individuals who demonstrate knowledge and a proven record of success in one or both of the following areas:

Business
- Experience in effectively facilitating day-to-day stakeholder interactions and relationships based in the US.
- Experience working on high-performing teams, preferably in data analytics, consulting, and/or private equity.
- Strong analytics consulting experience with a demonstrated ability to translate complex data into actionable insights.
- Experience working with business frameworks to analyze markets and assess company position and performance.
- Experience working with alternative data and market data sets to draw insights on competitive positioning and company performance.
- Understanding of financial statements, business cycles (revenue, supply chain, etc.), business diligence, financial modeling, valuation, etc.
- Experience working in a dynamic, collaborative environment and under time-sensitive client deadlines.
- Provide insights by understanding clients' businesses, their industry, and value drivers.
- Strong communication and proven presentation skills.

Technical
- High degree of collaboration, ingenuity, and innovation in applying tools and techniques to address client questions.
- Ability to synthesize insights and recommendations into a tight and cohesive presentation to clients.
- Proven track record of data extraction/transformation, analytics, and visualization approaches, and a high degree of data fluency.
- Proven skills in the following preferred: Alteryx, PySpark, Python, advanced Excel, Power BI (including visualization and DAX), MS Office.
- Experience working with GenAI/large language models (LLMs) is good to have.
- Experience with big data and machine learning concepts.
- Strong track record of leveraging data and business intelligence software to turn data into insights.

Posted 1 week ago

Apply

2.0 - 6.0 years

12 - 22 Lacs

Hyderabad, Bengaluru

Hybrid


What we ask

Experience: 2-6 years of experience in Data Engineering roles.

Technical skills:
Proficiency in SQL, Python, and big data technologies (PySpark, Hive, Hadoop).
Strong understanding of data pipelines.
Familiarity with data visualization tools.
Good understanding of ETL pipelines.
Good experience in data modelling.

Communication skills:
Ability to communicate complex technical concepts.
Strong collaborative and team-oriented mindset.

We would be excited if you have:
Excellent communication and interpersonal skills.
Ability to meet deadlines and manage project delivery.
Excellent report-writing and presentation skills.
Critical thinking and problem-solving capabilities.

What's in it for you?

A Happy Workplace! We create an environment where everyone feels welcome; we are more than just co-workers, sharing an informal and fun workplace. Our teams are highly adaptive, and our dynamic culture pushes everyone to create success in all dimensions.

Lucrative Packages and Perks: At Indium we recognize your talent and offer competitive salaries better than the market standards. In addition to appraisals, rewards, and recognition programs conducted regularly, we have performance bonuses, sign-offs, and joining bonuses to value your contributions and success for yourself and Indium.

Your Health is a Priority for Us! A healthy and happy workforce is important to us, so we ensure that you and your dependents are covered under our medical insurance policy. From 1:1 counselling sessions for your mental well-being to fun-filled fitness initiatives, we ensure you stay healthy and happy!

Skill Up to Scale Up: We believe in continuous learning as part of our core values, so we provide excellent training initiatives along with access to our mainspring learning platform, Indium Academy, to ensure you keep yourself equipped with the necessary technical skills for greater success.

Work-Life Balance: With a flexible hybrid working culture, a 5-day work week, and lots of fun de-stressing initiatives, we create a positive and relaxed environment to work in!
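For flavor, here is a minimal sketch of the kind of PySpark-plus-Hive ETL step the role describes; the database, table, and column names are illustrative assumptions, not details from the posting.

```python
# A minimal ETL sketch: read a raw Hive table, clean it, and write a
# partitioned result. All object names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("daily-etl")
    .enableHiveSupport()
    .getOrCreate()
)

raw = spark.table("staging.raw_events")  # hypothetical source table

cleaned = (
    raw.dropDuplicates(["event_id"])
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("event_date").isNotNull())
)

# Partitioning by date keeps downstream queries cheap.
(cleaned.write.mode("overwrite")
 .partitionBy("event_date")
 .saveAsTable("analytics.events_clean"))
```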

Posted 1 week ago

Apply

2.0 - 5.5 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At PwC, our people in managed services focus on a variety of outsourced solutions and support clients across numerous functions. These individuals help organisations streamline their operations, reduce costs, and improve efficiency by managing key processes and functions on their behalf. They are skilled in project management, technology, and process optimization to deliver high-quality services to clients. Those in managed service management and strategy at PwC focus on transitioning and running services, along with managing delivery teams, programmes, commercials, performance, and delivery risk. Your work will involve continuous improvement and optimisation of the managed services process, tools, and services.

Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities.

Skills: Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:

Apply a learning mindset and take ownership of your own development.
Appreciate diverse perspectives, needs, and feelings of others.
Adopt habits to sustain high performance and develop your potential.
Actively listen, ask questions to check understanding, and clearly express ideas.
Seek, reflect on, act on, and give feedback.
Gather information from a range of sources to analyse facts and discern patterns.
Commit to understanding how the business works and building commercial awareness.
Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), and uphold the Firm's code of conduct and independence requirements.

Role: Associate
Tower: Data, Analytics & Specialist Managed Service
Experience: 2.0 - 5.5 years
Key Skills: AWS
Educational Qualification: BE / B Tech / ME / M Tech / MBA
Work Location: India

Job Description: As an Associate, you will work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution. PwC Professional skills and responsibilities for this management level include but are not limited to:

Use feedback and reflection to develop self-awareness and personal strengths, and address development areas.
Be flexible to work on stretch opportunities/assignments.
Demonstrate critical thinking and the ability to bring order to unstructured problems.
Review ticket quality and deliverables; provide status reporting for the project.
Adhere to SLAs; bring experience in incident management, change management, and problem management.
Seek and embrace opportunities that give exposure to different situations, environments, and perspectives.
Use straightforward communication, in a structured way, when influencing and connecting with others.
Read situations and modify behavior to build quality relationships.
Uphold the firm's code of ethics and business conduct.
Demonstrate leadership capabilities by working with clients directly and leading the engagement.
Work in a team environment that includes client interactions, workstream management, and cross-team collaboration.
Be a good team player, take up cross-competency work, and contribute to COE activities.
Handle escalation and risk management.

Position Requirements

Required Skills: AWS Cloud Engineer

Job description: The candidate is expected to demonstrate extensive knowledge and/or a proven record of success in the following areas:

A minimum of 2 years of hands-on experience building advanced data warehousing solutions on leading cloud platforms.
A minimum of 1-3 years of Operate/Managed Services/Production Support experience.
Extensive experience in developing scalable, repeatable, and secure data structures and pipelines to ingest, store, collect, standardize, and integrate data for downstream consumption by Business Intelligence systems, analytics modelling, data scientists, etc.
Designing and implementing data pipelines to extract, transform, and load (ETL) data from various sources into data storage systems, such as data warehouses or data lakes.
Experience in building efficient ETL/ELT processes using industry-leading tools like AWS, AWS Glue, AWS Lambda, AWS DMS, PySpark, SQL, Python, DBT, Prefect, Snowflake, etc.
Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in AWS.
Work together with data scientists and analysts to understand data needs and create effective data workflows.
Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
Monitor and troubleshoot data pipelines, resolving issues related to data processing, transformation, or storage.
Implement and maintain data security and privacy measures, including access controls and encryption, to protect sensitive data.
Advanced working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
Experience building and maintaining data governance solutions (data quality, metadata management, lineage, master data management, and data security) using industry-leading tools.
Scaling and optimizing schemas, and performance-tuning SQL and ETL pipelines in data lake and data warehouse environments.
Hands-on experience with data analytics tools like Informatica, Collibra, Hadoop, Spark, Snowflake, etc.
Experience with ITIL processes like incident management, problem management, knowledge management, release management, data DevOps, etc.
Strong communication, problem-solving, quantitative, and analytical abilities.

Nice to have: AWS certification.

Managed Services - Data, Analytics & Insights Managed Service

At PwC we relentlessly focus on working with our clients to bring the power of technology and humans together and create simple, yet powerful solutions. We imagine a day when our clients can simply focus on their business knowing that they have a trusted partner for their IT needs. Every day we are motivated and passionate about making our clients better. Within our Managed Services platform, PwC delivers integrated services and solutions that are grounded in deep industry experience and powered by the talent that you would expect from the PwC brand. The PwC Managed Services platform delivers scalable solutions that add greater value to our clients' enterprise through technology and human-enabled experiences.

Our team of highly skilled and trained global professionals, combined with the use of the latest advancements in technology and process, allows us to provide effective and efficient outcomes. With PwC's Managed Services our clients are able to focus on accelerating their priorities, including optimizing operations and accelerating outcomes. PwC brings a consultative-first approach to operations, leveraging our deep industry insights combined with world-class talent and assets to enable transformational journeys that drive sustained client outcomes. Our clients need flexible access to world-class business and technology capabilities that keep pace with today's dynamic business environment.

Within our global Managed Services platform, we provide Data, Analytics & Insights services, focusing on the evolution of our clients' data and analytics ecosystem. Our focus is to empower our clients to navigate and capture the value of their Data & Analytics portfolio while cost-effectively operating and protecting their solutions. We do this so that our clients can focus on what matters most to their business: accelerating growth that is dynamic, efficient, and cost-effective.

As a member of our Data, Analytics & Insights Managed Service team, we are looking for candidates who thrive in a high-paced work environment and are capable of working on a mix of critical Data, Analytics & Insights offerings and engagements, including help desk support, enhancement and optimization work, as well as strategic roadmap and advisory-level work. It will also be key to lend experience and effort in helping win and support customer engagements from not only a technical perspective, but also a relationship perspective.
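As an illustration of the AWS Glue + PySpark ETL work the requirements describe, here is a hedged sketch of a simple Glue job; the catalog database, table, and S3 path are placeholders, not actual client resources.

```python
# A minimal AWS Glue PySpark job: read from the Glue Data Catalog, drop
# obviously bad rows, and land the result as Parquet in a data lake.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"  # hypothetical catalog entries
)
df = dyf.toDF().filter("order_id IS NOT NULL")

df.write.mode("append").parquet("s3://example-bucket/curated/orders/")
job.commit()
```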

Posted 1 week ago

Apply


8.0 - 13.0 years

20 - 32 Lacs

Bengaluru

Hybrid


Job Title: Senior Data Engineer
Experience: 9+ years
Location: Whitefield, Bangalore
Notice Period: Serving notice or immediate joiners

Role & Responsibilities:
Design and implement scalable data pipelines for ingesting, transforming, and loading data from diverse sources and tools.
Develop robust data models to support analytical and reporting requirements.
Automate data engineering processes using appropriate scripting languages and frameworks.
Collaborate with engineers, process managers, and data scientists to gather requirements and deliver effective data solutions.
Serve as a liaison between engineering and business teams on all data-related initiatives.
Automate monitoring and alerting for data pipelines, products, and dashboards; provide support for issue resolution, including on-call responsibilities.
Write optimized and modular SQL queries, including view and table creation as required.
Define and implement best practices for data validation, ensuring alignment with enterprise standards.
Manage QA data environments, including test data creation and maintenance.

Qualifications:
9+ years of experience in data engineering or a related field.
Proven experience with Agile software development practices.
Strong SQL skills and experience working with both RDBMS and NoSQL databases.
Hands-on experience with cloud-based data warehousing platforms such as Snowflake and Amazon Redshift.
Proficiency with cloud technologies, preferably AWS.
Deep knowledge of data modeling, data warehousing, and data lake concepts.
Practical experience with ETL/ELT tools and frameworks.
5+ years of experience in application development using Python, SQL, Scala, or Java.
Experience working with real-time data streaming and associated platforms.

Note: The candidate should be based in Bangalore, as one technical round will be conducted face-to-face at the Bellandur, Bangalore office.
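To illustrate the data-validation best practices the role calls for, here is a minimal sketch of modular SQL checks run before a load is promoted. sqlite3 stands in here for a warehouse such as Snowflake or Redshift, and the table, columns, and checks are illustrative assumptions.

```python
# Run a set of named SQL checks; each query must return no rows to pass.
import sqlite3

CHECKS = [
    ("null_order_ids", "SELECT 1 FROM orders WHERE order_id IS NULL"),
    ("negative_amounts", "SELECT 1 FROM orders WHERE amount < 0"),
]

def run_checks(conn: sqlite3.Connection) -> list[str]:
    """Return the names of failed checks."""
    return [
        name for name, sql in CHECKS
        if conn.execute(sql).fetchone() is not None
    ]

# Tiny runnable demo with an in-memory table standing in for staging data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
conn.execute("INSERT INTO orders VALUES (1, 10.0), (NULL, 5.0)")

print("Failed checks:", run_checks(conn))  # -> ['null_order_ids']
```

In a real pipeline, a non-empty failure list would block the load and trigger the alerting described above.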

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

India

On-site


About Oportun: Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

Working at Oportun: Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable, and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Company Overview: At Oportun, we are on a mission to foster financial inclusion for all by providing affordable and responsible lending solutions to underserved communities. As a purpose-driven financial technology company, we believe in empowering our customers with access to responsible credit that can positively transform their lives. Our relentless commitment to innovation and data-driven practices has positioned us as a leader in the industry, and we are actively seeking exceptional individuals to join our team as a Senior Software Engineer to play a critical role in driving positive change.

Position Overview: We are seeking a highly skilled Platform Engineer with expertise in building self-serve platforms that combine real-time ML deployment and advanced data engineering capabilities. This role requires a blend of cloud-native platform engineering, data pipeline development, and deployment expertise. The ideal candidate will have a strong background in implementing data workflows and building platforms that enable self-serve ML pipelines with seamless deployments.

Responsibilities

Platform Engineering:
Design and build self-serve platforms that support real-time ML deployment and robust data engineering workflows.
Create APIs and backend services using Python and FastAPI to manage and monitor ML workflows and data pipelines.

Real-Time ML Deployment:
Implement platforms for real-time ML inference using tools like AWS SageMaker and Databricks.
Enable model versioning, monitoring, and lifecycle management with observability tools such as New Relic.

Data Engineering:
Build and optimize ETL/ELT pipelines for data preprocessing, transformation, and storage using PySpark and Pandas.
Develop and manage feature stores to ensure consistent, high-quality data for ML model training and deployment.
Design scalable, distributed data pipelines on platforms like AWS, integrating tools such as DynamoDB, PostgreSQL, MongoDB, and MariaDB.

CI/CD and Automation:
Build CI/CD pipelines using Jenkins, GitHub Actions, and other tools for automated deployments and testing.
Automate data validation and monitoring processes to ensure high-quality and consistent data workflows.

Documentation and Collaboration:
Create and maintain detailed technical documentation, including high-level and low-level architecture designs.
Collaborate with cross-functional teams to gather requirements and deliver solutions that align with business goals.
Participate in Agile processes such as sprint planning, daily standups, and retrospectives using tools like Jira.

Required Qualifications

Experience:
5-10 years of experience in IT.
5-8 years of experience in platform backend engineering.
1 year of experience in DevOps and data engineering roles.
Hands-on experience with real-time ML model deployment and data engineering workflows.

Technical Skills:
Strong expertise in Python and experience with Pandas, PySpark, and FastAPI.
Proficiency in container orchestration tools such as Kubernetes (K8s) and Docker.
Advanced knowledge of AWS services like SageMaker, Lambda, DynamoDB, EC2, and S3.
Proven experience building and optimizing distributed data pipelines using Databricks and PySpark.
Solid understanding of databases such as MongoDB, DynamoDB, MariaDB, and PostgreSQL.
Proficiency with CI/CD tools like Jenkins, GitHub Actions, and related automation frameworks.
Hands-on experience with observability tools like New Relic for monitoring and troubleshooting.

We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status, or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/.

We will never request personally identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI's Internet Crime Complaint Center (IC3).
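As a flavor of the FastAPI-based inference services the role describes, here is a hedged sketch of a minimal prediction endpoint; the feature names and scoring function are stand-ins for a real versioned model served from SageMaker or a model registry.

```python
# A minimal FastAPI inference endpoint. The score() function is a trivial
# stand-in; a production service would load a versioned model and add
# observability (e.g. New Relic, as the posting notes).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    income: float          # hypothetical feature
    tenure_months: int     # hypothetical feature

def score(features: Features) -> float:
    # Stand-in for model.predict() on the same feature vector.
    return min(1.0, features.income / 100_000 + features.tenure_months / 120)

@app.post("/predict")
def predict(features: Features) -> dict:
    return {"score": score(features)}

# Run with: uvicorn my_service:app --reload  (module name is hypothetical)
```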

Posted 1 week ago

Apply

8.0 - 12.0 years

0 - 20 Lacs

Hyderabad, Bengaluru

Work from Office


Tech Mahindra is hiring for Azure Data Engineer.

Roles and Responsibilities:
Design, develop, test, deploy, and maintain Azure Data Factory (ADF) pipelines for data integration and migration projects.
Collaborate with cross-functional teams to gather requirements and design solutions that meet business needs.
Develop complex SQL queries to extract insights from large datasets using PySpark on Azure Databricks.
Troubleshoot issues related to ADF pipeline failures and optimize performance for improved efficiency.

Job Requirements:
Experience in the IT Services & Consulting industry with expertise in ADF development.
Strong understanding of Azure Data Lake Storage, Azure Data Factory, Azure Databricks, the Python programming language, and SQL querying concepts.
Experience working with big data technologies such as Hadoop ecosystem components, including Hive, Pig, etc.
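To illustrate the Databricks-side work the posting mentions, here is a minimal sketch of reading from Azure Data Lake Storage and running a SQL aggregation with PySpark; the storage account, path, and columns are illustrative placeholders.

```python
# Read a Delta table from ADLS Gen2 and run a simple SQL aggregation.
from pyspark.sql import SparkSession

# On a Databricks cluster `spark` is already provided; getOrCreate() also
# covers local experimentation.
spark = SparkSession.builder.getOrCreate()

df = spark.read.format("delta").load(
    "abfss://raw@examplestorage.dfs.core.windows.net/sales"  # hypothetical path
)
df.createOrReplaceTempView("sales")

# The complex-SQL part of the role, reduced to a simple example.
insights = spark.sql("""
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM sales
    GROUP BY region
    ORDER BY revenue DESC
""")
insights.show()
```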

Posted 1 week ago

Apply